Archive for the ‘Reuse’ Category

SOA and Reuse

In a two-part podcast series, Dave Berry from Oracle’s Fusion Middleware team and Mike van Alst, a consultant with IT-eye, discussed some remarks I made in an earlier OTN Arch2Arch podcast regarding SOA and reuse. Specifically, I tried to de-emphasize the reuse aspect of SOA. Many reuse programs that I’ve seen or read about have two key elements:

  1. Building things in a reusable manner
  2. Making those things visible

While these are noble goals, such approaches are at significant risk of failing to produce the intended results. The first item has a fundamental problem in that it is all but impossible to define exactly what “building in a reusable manner” means. We can use open, interoperable standards rather than closed, proprietary ones, but is this the key barrier to reuse? There’s probably some low-hanging fruit that this will capture, but there’s so much more to reuse than this. From a technical standpoint, one must also consider the structure of the information being exchanged and its varying granularity, among other things.

On the second item, visibility is important, there’s no doubt about it. But visibility without context will not be successful. It’s a matter of providing the right information at the right time. Too many initiatives associated with the collection of IT artifacts, be it reuse, SOA, portfolio management, ITSM, or the like, fail because the information is never put into the context of the processes that need it. How many times have you seen information collected as part of a fire drill for an immediate need, only to grow stale once that fire drill is completed?

The two things I recommend are service ownership and linkage to key IT processes. If you’ve heard me speak on panel discussions at conferences, you’ll know that my answer to the question, “What’s the one piece of advice you have for companies adopting SOA?” has always been, “Define your service owners.” Someone is given responsibility for a functional area, provides capabilities to the rest of the organization, and is accountable for driving out the redundancies that may exist. This is a tricky exercise, because service ownership has a cost associated with it. Expending that cost for a service that is only used by one consumer can lead to waste, so it’s not a silver bullet. It does, however, begin the cultural change from a project-driven organization to more of a product-driven/service-driven organization. Without someone accountable for eliminating redundancy in a domain and serving the needs of consumers, it won’t happen.

The second piece of advice is process integration. To avoid creating repositories that see infrequent use after initial population, you have to define the role of that information in your IT processes. If you have a service repository, when do you expect project architects and designers to look into that repository for services that may be appropriate? How about in the strategic planning process? The scoping effort for a project likely begins long before a project architect is assigned, so how is the service repository used in those activities? By defining the links with key IT processes and ensuring that those processes are changed to use the repositories involved, with appropriate governance to make sure those changes are occurring, you will make sure that your services are visible and, more importantly, that the right people are looking for them at the right time.
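
To make the idea of process linkage a bit more concrete, here is a minimal sketch, in Java, of the kind of repository lookup a scoping or planning activity might perform. The repository API, class names, and services shown are all hypothetical; a real registry/repository product would provide its own search interface, but the point is the same: the lookup has to be a defined step in the process, not an afterthought.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: a service repository queried during project scoping,
// long before a project architect is assigned. All names are illustrative only.
public class ScopingRepositoryLookup {

    static class ServiceEntry {
        final String name;
        final String businessCapability;
        ServiceEntry(String name, String businessCapability) {
            this.name = name;
            this.businessCapability = businessCapability;
        }
    }

    // A trivial keyword match stands in for whatever search a real repository offers.
    static List<ServiceEntry> findCandidates(List<ServiceEntry> repository, String capabilityKeyword) {
        List<ServiceEntry> matches = new ArrayList<>();
        for (ServiceEntry entry : repository) {
            if (entry.businessCapability.toLowerCase().contains(capabilityKeyword.toLowerCase())) {
                matches.add(entry);
            }
        }
        return matches;
    }

    public static void main(String[] args) {
        List<ServiceEntry> repository = new ArrayList<>();
        repository.add(new ServiceEntry("CustomerProfileService", "customer information management"));
        repository.add(new ServiceEntry("OrderStatusService", "order tracking"));

        // During scoping: "does anything already provide customer information?"
        for (ServiceEntry candidate : findCandidates(repository, "customer")) {
            System.out.println("Existing candidate: " + candidate.name);
        }
    }
}
```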

Encouraging Culture Change

In a comment on my “EA and SOA Case Panel” entry, Surekha Durvasula asked me a couple of questions. They didn’t come up in the panel discussion, so I thought I’d respond in a separate entry, as the topic should be of interest to many of my readers. She wrote:

Is “reuse” of a business service considered a valuable metric? How does governance influence the “reusability metric”? Did this come up during this SOA panel?

Specifically, I am wondering if service governance has any bearing in terms of not only promoting the usage of a service but also in ensuring that the enhancement of a service is in keeping with the enterprise-worthiness of the service. Often times it is the evolution of the service where cross-domain applicability is sacrificed.

Also, is there a trend in the industry in terms of promoting business service usage via the use of a “rewards program” or in tying it to compensation packages? Have some industries reached a level of maturity in terms of service reuse especially in those industry verticals that are hit with global competition forcing them to reduce overall operations costs and/or to offer novel product offerings?

Let’s take these one at a time. On the subject of reuse, I absolutely think that the number of consumers is a valuable metric. At the same time, when dealing with reuse, one must be cautious that it isn’t the only metric of interest. I’ve been in meetings with individuals who have made comments like, “If a service isn’t reused, why are you making it a service in the first place?” I strongly disagree with statements like this, as do most pundits in the SOA space. To defend this position, I frequently quote the oft-referenced Credit Suisse SOA efforts, where they stated that their average number of consumers per service was 1.5. This means that there will be many services that aren’t reused, and probably some that are used by many consumers. While reuse is important, we also have to be looking at metrics for agility, which, loosely stated, is the ability to respond to business change. This will involve tracking the time it takes to develop solutions. The theory is that by breaking a solution apart into autonomous services, I reduce the number of touch points when business needs change. In reality, it depends on the type of change. For example, most of us would agree that a separation of presentation logic from the core business processing is a good thing. That being said, there certainly are plenty of changes that will require touching both the presentation logic and the business logic. One of the most difficult parts of SOA is knowing where to draw service boundaries, because the rules are always changing.
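
As a rough illustration of how simple this particular metric is to calculate (and why it shouldn’t stand alone), here’s a small Java sketch that computes the average number of consumers per service from a hypothetical registry of service-to-consumer relationships. The service and consumer names are invented; the 1.5 result simply mirrors the Credit Suisse figure quoted above, where a few services carry most of the reuse.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch only: computing "consumers per service" from a
// hypothetical registry of service-to-consumer mappings.
public class ReuseMetric {
    public static void main(String[] args) {
        Map<String, List<String>> consumersByService = new HashMap<>();
        consumersByService.put("CustomerLookupService", Arrays.asList("PortalApp", "BillingApp", "CrmApp"));
        consumersByService.put("AddressValidationService", Arrays.asList("PortalApp"));
        consumersByService.put("TaxCalculationService", Arrays.asList("OrderApp"));
        consumersByService.put("OrderStatusService", Arrays.asList("PortalApp"));

        int totalConsumerRelationships = 0;
        for (List<String> consumers : consumersByService.values()) {
            totalConsumerRelationships += consumers.size();
        }
        double average = (double) totalConsumerRelationships / consumersByService.size();

        // 6 consumer relationships across 4 services = 1.5: most services have a
        // single consumer, while one or two are reused broadly.
        System.out.printf("Average consumers per service: %.1f%n", average);
    }
}
```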

Back to the subject: if we have reusable services, what role does governance play in ensuring that the service doesn’t fork into a bunch of one-off, consumer-specific variants? This is a very interesting question, one that I hadn’t thought much about in the past. My gut is telling me that the burden for this belongs with the service manager, not with a governance team. That’s not to say that there shouldn’t be any involvement from the governance group, but I see a much stronger role for governance in establishing the original service boundaries and assigning service ownership. For future versions, the service manager must be the one who can recognize when the service is getting outside of the boundaries that were originally intended, and this will happen. In some cases, boundaries may need to be redefined. In other cases, we may need to push back on the consumers. All of this starts with the service manager. The service manager must balance the needs of the consumers against the cost of service management. Measurements for determining that manager’s performance should include the number of versions currently being managed and the time required to respond to consumer requests. It is then in their best interest to keep the service focused on its original purpose.

Finally, regarding “rewards programs” or incentives, I don’t know that I’ve ever heard of a case study centered around reuse that didn’t involve incentives. SOA is about culture change, and it’s extremely difficult to change culture without incentives. One need only look at government to understand how change occurs. No one would be happy if the federal government mandated that all cars sold starting in 2008 had to get 50 mpg or higher. This is the “big stick” approach: I’ve got a big stick and you’ll get whacked with it if you don’t comply. In terms of IT incentives, one manager I worked with summed up the “big stick” approach well: “Your incentive is that you’ll keep your job.” More typically, the government takes a “carrot” approach, at least at the beginning. Tax breaks are granted to companies that produce high-mpg vehicles and to consumers who buy them. These incentives may not even cover the added cost of that approach (e.g. does a $500 tax break for 4 years justify spending $3000 more on a vehicle?), but just the fact that they exist can often be enough to encourage the behavior. Only when enough momentum has gathered does the stick come out, essentially codifying as policy what the majority of people are already doing. Overall, I think incentives should be viewed as a short-term tool to get momentum behind the change, but they should always be planned for phase-out once the desired behavior is achieved. Have we reached that point with SOA? I’ve yet to see a company that has. Have we reached that point with reusable libraries? Partially. Most developers would not build their own low-level frameworks today. The problem, however, is that multiple frameworks exist, and there’s still strong resistance in many organizations to having a single solution come from a frameworks team. I heard my first talk on reuse back in 1998, so it’s very clear that widespread culture change takes a long time.

Focus on the consumer

The latest Briefings Direct: SOA Insights podcast is now available. In this episode, we discussed semantic web technologies, among other things. One of my comments in the discussion was that I feel these technologies have struggled to reach the mainstream because we haven’t figured out a way to make them relevant to the developers working on projects. I used this same argument in the panel discussion at The Open Group EA Practitioners Conference on July 23rd. In thinking about this, I realized that there is a strong connection between this thinking and SOA. Simply put, it is all about the consumer.

Back when my day-to-day responsibilities were programming, I had a strong interest in human-computer interaction and user interface design. The reason for this was that the users were the end consumer of the products I was producing. It never ceased to amaze me how many developers designed user interfaces as if they were the consumer of the application, and wound up giving the real consumer (the end user) a very lousy user experience.

This notion of a consumer-first view needs to be at the heart of everything we do. If you’re an application designer, it doesn’t bode well if your consumers hate using your application. Increasingly, more and more choices for getting things done are freely available on the Internet, and there’s no shortage of business workers leveraging these tools, most likely under the radar. If you want your users to use your systems, the best path is to make it a pleasant experience for them.

If you’re an enterprise architect, you need to ask who the consumers of your deliverables are. If you create a reference architecture that is only of interest to your fellow enterprise architects, it’s not going to help the organization. If anything, it’s going to create tension between the architecture staff and the developers. Start with the consumer first, and provide material for what they need. A reference architecture should be used by the people coming up with a solution architecture for projects. If your reference architecture is not consumable by that audience, they’ll simply go off and do their own thing.

If you are developing a service, you need to put your effort into making sure it can be easily consumed if you want to achieve broad consumption. It is still more likely today that a project will build both the service consumer and the service provider. As a result, the likelihood is that the service will only be easily consumable by that first consumer, just as the user interface I mentioned earlier was only easily consumed by the developer who wrote it.

How do we avoid this? Simple: know your consumer. Spend some time understanding your consumer first, rather than focusing all of your attention on knowing your service. Ultimately, your consumers define what the “right” service is, not you. You can look at any type of product on the market today, and you’ll see that the majority of successful products are the ones that are truly consumer-friendly. Yes, there are successful products that are able to force their will on consumers due to market share and are not considered consumer-friendly, but I’d venture a guess that these do not constitute the majority of successful products.

My advice to my readers is to always ask the question, “Who needs to use this, and how can I make it easy for them?” There are many areas of IT that may not be directly involved with project activities. If you don’t make that work relevant to project activities, it will continue to sit off on an island. If you’re in a situation where you’re seen as an expert in some space, like semantic technologies, and the model for using those technologies on projects is to have yourself personally involved with every one of them, that doesn’t scale, and your efforts will not be successful. Instead, focus on how to make the technology relevant to the problems that your consumers need to solve, and do it in a way that your consumers want to use, because it makes their lives easier.

Most popular posts to date

It’s funny how these syndicated feeds can be just like syndicated TV. I’ve decided to leverage Google Analytics and create a post with links to the most popular entries since January 2006. My blog isn’t really a diary of activities, but a collection of opinions and advice that hopefully remain relevant. While the occasional Google search will lead you to many of these, most have long since dropped off the normal RSS feed. So, much like long-running TV shows like to clip together a “best of” show, here’s my “best of” entry according to Google Analytics.

  • Barriers to SOA Adoption: This was originally posted on May 4, 2007, and was in response to a ZapThink ZapFlash on the subject.
  • Reusing reuse…: This was originally posted on August 30, 2006, and discusses how SOA should not be sold purely as a means to achieve reuse.
  • Service Taxonomy: This was originally posted on December 18, 2006 and was my 100th post. It discusses the importance and challenges of developing a service taxonomy.
  • Is the SOA Suite good or bad? This was originally posted on March 15, 2007 and stresses that whatever infrastructure you select (suite or best-of-breed), the important factor is that it fit within a vendor-independent target architecture.
  • Well defined interfaces: This post is the oldest one on the list, from February 24, 2006. It discusses what I believe is the important factor in creating a well-defined interface.
  • Uptake of Complex Event Processing (CEP): This post from February 19, 2007 discusses my thoughts on the pace that major enterprises will take up CEP technologies and certainly raised some interesting debate from some CEP vendors.
  • Master Metadata/Policy Management: This post from March 26, 2007 discusses the increasing problem of managing policies and metadata, and the number of metadata repositories that may exist in an enterprise.
  • The Power of the Feedback Loop: This post from January 5, 2007 was one of my favorites. I think it’s the first time that a cow-powered dairy farm was compared to enterprise IT.
  • The expanding world of the “repistry”: This post from August 25, 2006 discusses registries, repositories, CMDBs and the like.
  • Preparing the IT Organization for SOA: This is a June 20, 2006 response to a question posted by Brenda Michelson on her eBizQ blog, which was encouraging a discussion around Business Driven Architecture.
  • SOA Maturity Model: This post on February 15, 2007 opened up a short-lived debate on maturity models, but this is certainly a topic of interest to many enterprises.
  • SOA and Virtualization: This post from December 11, 2006 tried to give some ideas on where there was a connection between SOA and virtualization technologies. It’s surprising to me that this post is in the top 5, because you’d think the two would be an apples and oranges type of discussion.
  • Top-Down, Bottom-Up, Middle-Out, Outside-In, Chicken, Egg, whatever: Probably one of the longest titles I’ve had, this post from June 6, 2006 discusses the many different ways that SOA and BPM can be approached, ultimately stating that the two are inseparable.
  • Converging in the middle: This post from October 26, 2006 discusses my whole take on the “in the middle” capabilities that may be needed as part of SOA adoption along with a view of how the different vendors are coming at it, whether through an ESB, an appliance, EAI, BPM, WSM, etc. I gave a talk on this subject at Catalyst 2006, and it’s nice to see that the topic is still appealing to many.
  • SOA and EA… This post on November 6, 2006 discussed the perceived differences between traditional EA practitioners and SOA adoption efforts.

Hopefully, you’ll give some of these older items a read. Just as I encouraged in my feedback loop post, I do leverage Google Analytics to see what people are reading, and to see what items have staying power. There’s always a spike when an entry is first posted (e.g. my iPhone review), and links from other sites always boost things up. Once a post has been up for a month, it’s good to go back and see what people are still finding through searches, etc.

Horizontal and Vertical Thinking

I’ve been meaning to post on this subject for some time, so it’s good that I got to the airport a little earlier than normal today. There’s nothing like blogging at 5:30 in the morning.

As I mentioned in my last entry, I just finished listening to a podcast from IT Conversations on the drive to the airport, a discussion on user experience with Irene Au, Director of User Experience for Google. One of the questions she took from the audience dealt with the notion of having a centralized group for user experience, or whether it should be a decentralized activity. This question frequently comes up in SOA discussions as well. Should you have a centralized service development team, or should your efforts be decentralized? There’s no right or wrong answer to this question; however, it’s certainly true that your choices can impact your success. In the podcast, Irene discussed the matrixed approach at Yahoo, and how everything wound up being funded by business units. This made it difficult to do activities for the greater good, such as style guides, etc. The business unit wanted to maximize their investment and have those resources focused on their activities, not someone else’s. Putting this same topic in the context of SOA, this would be the same as having user-facing application teams developing services. The challenge is that the business unit wants that user-facing application, and they want it yesterday. How do we create services that aren’t solely of value to just that application? At the opposite extreme, things can be centralized. Irene discussed the culture of open office hours at Google, and how she’ll have a line of people outside her office with their user experience questions in hand. While this may allow her to maintain a greater level of consistency, resource management can be a big challenge, as you are being pulled in multiple directions. Again, putting this in the SOA context, the risk is that in the quest for the perfect enterprise service, you can put individual project schedules at risk as they wait for the service they depend on. These are both extreme positions, and seldom is an organization at one extreme or the other; usually it’s somewhere in the middle.

In trying to tackle this problem, it’s useful to think of things as either horizontal or vertical. Horizontal concepts are ones where breadth is the more important concern. For example, infrastructure is most frequently presented as a horizontal concern. I can take a four-CPU server and run just about anything I’d like on it these days, meaning it provides broad coverage across a variety of functional domains. A term frequently used these days is commodity hardware, and the notion of commoditization is a characteristic of horizontal domains. When breadth becomes more important than depth, there usually aren’t many opportunities for innovation. Largely, activities become focused on reducing cost by leveraging economies of scale. Commoditization and standardization go hand in hand, as it’s difficult to classify something as a commodity if it doesn’t meet some standard criteria. In the business world, these horizontal areas are also the ones that are frequently outsourced, as all companies typically do them the same way, meaning there is little room for competitive differentiation.

Vertical concepts are ones where depth is the more important concern. In contrast to the commoditization associated with horizontal concerns, vertical items are ones where innovation can occur and where companies can set themselves apart from their competitors. User experience is still an area where significant differentiation can occur, so most user-facing applications fall into this category. Business knowledge and customer experience (preferably at a partnership level, with customers involved in the process) are key at this level.

By nature, vertical and horizontal concerns are orthogonal and can create tension. I have a friend who works as a user experience consultant, and he once asked me how to balance the concerns that come from a user experience focus, clearly rooted in the vertical domains, with the concerns of SOA, which are arguably more horizontal. There’s no easy answer to this. Just as the business must make decisions over time on where to commoditize and where to specialize, the same holds true for IT. Apple is a great example to look at, as their decision not to commoditize in their early days clearly resulted in them being relegated to niche (but still relevant) status in computer sales. Those same principles, however, of remaining vertically focused with tight top-to-bottom controls have now resulted in their successes with their computers, iTunes, Apple TV, the iPod, and the forthcoming iPhone. There are a number of ways to be successful, although far fewer ways than there are to be unsuccessful.

When trying to slice up your functional domains into domains of services, you must certainly align them with the business goals. If there is an area of the business where they are trying to create competitive differentiation, it is probably not the best area to look for services that will have broad enterprise reuse, although this depends heavily on whether technology plays a direct or an indirect role in that differentiation, such as whether the business-to-consumer interaction is solely through a website or goes through a company representative (e.g. a financial advisor). The areas that are closest to the end user are likely to require some degree of verticality to allow for tighter controls and differentiation. That’s not to say they own the entire solution, top to bottom, however, which would be a monolith.

As we go deeper into the stack, you should look for areas where commoditization and standardization outweigh the benefits of customization. It may begin at a domain level, such as integration across a suite of applications for a single business unit, with each successive level increasing the breadth of coverage. There is no single point where the vertical solutions stop and everything beneath it has enterprise breadth. Rather, it is a continuum of decreasing emphasis on depth and increasing emphasis on breadth. An Internet company may try to differentiate themselves in the end-user facing systems that the users interact with, allowing a large degree of autonomy for each product line. The supporting services behind those user interfaces will increase in the breadth of their scope, but still may require some degree of specialization, such as having to deal with a particular region of a country, or even of the world for global organizations. We then bleed into the area of “applistructure” and solutions that fall more into the support arena. A CRM system will have close ties to the end-user facing sales portal. The breadth of CRM coverage may need to span multiple lines of business, unlike the sales portal, where specialization could occur. Going deeper, we have applications that may have no ties to the end-user facing systems, but are still necessary to run a business, such as human resources. Interestingly, if you’re following my logic you may be thinking that these things should all be outsourced, but the truth is that many of these areas are still far from being commoditized. That’s because they involve user-facing components, which brings us back to those vertical domains where customization can add value. An organization that outsources the technology side of HR, but doesn’t have an associated reduction in HR staff, may have a potential conflict when they want specialized HR processes but are dealing with commodity interfaces and systems. Put simply, you can’t have it both ways.

The trend continues on down the stack to the infrastructure and the world of the individual developer. If you truly want to adopt SOA from top to bottom, there should be a high degree of commoditization and standardization at this level. Organizations where solutions are still built top-to-bottom, with customized hardware platforms, source code management, programming languages, etc., are going to struggle with SOA, because their culture is vertically oriented to an extreme.

Whatever the speed of change, business decisions on what things are core competencies and what things are not do not change overnight. Taking an organization where each product group had its own sales staff (vertically-oriented) and switching it to a centralized sales organization (horizontally-oriented) is a significant cultural change, and often doesn’t go smoothly. You only need to look at the number of mergers and acquisitions that have been deemed successful (less than 50%) to understand the difficulty. Switching from vertically-focused IT solutions to horizontally-focused IT solutions is just as difficult, if not more so. Humans are significantly more adaptable than technology solutions, which at the core are binary, yes/no environments. The important thing is to recognize when misalignment is occurring and take action to fix it. That’s all about governance. If users are trying to apply significant customization to a technology area that has been deemed a commodity by the business, push-back needs to occur to emphasize that the greater good takes precedence over individual needs. If IT is taking far too long to deliver a solution in an area where time to market and competitive differentiation are critical, remove the barriers and give that group more control over the solution, at the expense of broader applicability. If you don’t know what your priorities and principles are for each domain, however, you’ll end up in an endless sequence of meetings rooted in opinions, rather than in documented principles and behaviors desired by the organization.

Parallel Development and Integration

One topic that’s come up repeatedly in my work is that of parallel development of service consumers and service providers. While over time we would expect these efforts to become more and more independent, many organizations are still structured as application development organizations. This typically means that services are likely identified as part of an application project and, therefore, will be developed in parallel. The efforts may all be under one project manager, or the service development efforts may be spun off as independently managed projects. Personally, I prefer the latter, as I think it increases the chances of keeping the service independent of the consumer, as well as establishing clear service ownership from the beginning. Regardless of your approach, there is a need to manage the development efforts so that chaos doesn’t ensue.

To paint a picture of the problem, let’s look at a popular technique today: continuous integration. In a continuous integration environment, there is a series of automated builds and tests that are run on a scheduled basis using tools like CruiseControl, Ant/NAnt, etc. In this environment, shortly after someone checks in some code, a series of tests is run that will validate whether any problems have been introduced. This allows problems to be identified very early in the process, rather than waiting for some formal integration testing phase. This is a good practice, if for no other reason than encouraging personal responsibility for good testing from the developers. No one likes to be the one who breaks the build.

The challenge this creates with SOA, however, is that the service consumer and the service provider are supposed to be independent of each other. Continuous integration makes sense at the class/object level. The classes that compose a particular component of the system are tightly coupled, and should move in lock step. Service consumers and providers should be loosely coupled. They should share contract, not code. This contract should introduce some formality into the consumer/provider relationship, rather than viewing it in the same light as integration between two tightly coupled classes. What I’ve found is that when the handoffs between a service development team and a consumer development team are not formalized, sooner or later it turns into a finger-pointing exercise because something isn’t working the way they’d like, typically due to assumptions regarding the stability of the service. Oftentimes, the service consumer is running in a development environment and trying to use a service that is also running in a development environment. The problem is that development environments, by definition, are inherently unstable. If that development environment is controlled by the automated build system, the service implementation may be changing three or more times a day. How can a service consumer expect consistent behavior when a service is changing that frequently? Those automated builds often include setup and teardown of test data for unit tests. The potential exists that incoming requests from a service consumer not associated with those tests may cause the unit testing to fail, because they may change the state of the system. So how do we fix the problem? I see two key activities.
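
To illustrate “share contract, not code” in the simplest possible terms, here is a hedged Java sketch of a published contract that both teams compile against. The interface and value object are hypothetical; in a web services setting the equivalent shared artifact would be the WSDL and schema rather than a Java type, but the principle is the same: the consumer depends on the contract, never on provider internals.

```java
// Illustrative sketch: the only artifact the consumer team sees is this
// published contract (in a web services world, the analogue is the WSDL
// and schema). Provider internals stay behind it and can change freely.
public interface CustomerLookupService {

    // A simple value object that belongs to the contract, not to either implementation.
    class CustomerRecord {
        public final String customerId;
        public final String displayName;
        public CustomerRecord(String customerId, String displayName) {
            this.customerId = customerId;
            this.displayName = displayName;
        }
    }

    // Versioned, documented operations; changes here are consumer-visible
    // and should follow the release plan, not the nightly build.
    CustomerRecord findById(String customerId);
}
```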

First, you need to establish a stable integration environment. You may be thinking, “I already have an integration testing environment,” but is that environment used for integration with things outside of the project’s control, or is it used for integration of the major components within the project’s control? My experience has been the latter. This creates a problem. If the service development team is performing their own integration testing in the IT environment, say with a database dependency, they’re testing the things they need to integrate with, not the things that want to integrate with them. If the service consumer uses the service in that same IT environment, that service is probably not stable, since it’s being tested itself. You’re setting yourself up for failure in this scenario. The right way, in my opinion, to address this is to create one or more stable integration environments. This is where services (and other resources) are deployed when they have a guaranteed degree of stability and are “open for business.” This doesn’t mean they are functionally complete, only that the service manager has clearly stated what things work and what things don’t. The environment is dedicated for use by consumers of those services, not by the service development team. Creating such an environment is not easily done, because you need to manage the entire dependency chain. If a consumer invokes a service that updates a database and then pushes a message out on a queue for consumption by that original consumer, you can have a problem if that consumer is pointing at a service in one environment, but a MOM system in another environment. Overall, the purpose of creating this stable integration environment is to manage expectations. In an environment where things are changing rapidly, it’s difficult to set any expectation other than that the service may change out from underneath you. That may work fine when four developers are sitting in cubes next to each other, but it makes things very difficult if the service development team is in an offshore development center (or even on another floor of the building) and the consumer development team is located elsewhere. While you can manage expectations without creating new environments, creating them makes it easier to do so. This leads to the second recommendation.
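
One way to make the dependency-chain point concrete is to have consumers resolve all of their endpoints from a single named environment. The sketch below, with entirely invented environment names, URLs, and queue names, shows the idea; it is not a prescription for any particular configuration tool, just an illustration that the service endpoint and the MOM endpoint should always come from the same environment definition.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: a consumer resolves *all* of its endpoints (service URL,
// message queue, etc.) from one named environment, rather than mixing a service
// in one environment with a MOM system in another. All names/URLs are invented.
public class EnvironmentCatalog {

    static class Environment {
        final String customerServiceUrl;
        final String orderEventsQueue;
        Environment(String customerServiceUrl, String orderEventsQueue) {
            this.customerServiceUrl = customerServiceUrl;
            this.orderEventsQueue = orderEventsQueue;
        }
    }

    private static final Map<String, Environment> ENVIRONMENTS = new HashMap<>();
    static {
        // The service team's own, constantly rebuilt environment.
        ENVIRONMENTS.put("dev", new Environment("http://dev.example.local/customer", "dev.order.events"));
        // The stable integration environment: only promoted, "open for business" releases land here.
        ENVIRONMENTS.put("stable-int", new Environment("http://stableint.example.local/customer", "stableint.order.events"));
    }

    // Consumers ask for one environment by name and get a consistent set of endpoints.
    static Environment resolve(String name) {
        Environment env = ENVIRONMENTS.get(name);
        if (env == null) {
            throw new IllegalArgumentException("Unknown environment: " + name);
        }
        return env;
    }

    public static void main(String[] args) {
        Environment env = resolve("stable-int");
        System.out.println("Service endpoint: " + env.customerServiceUrl);
        System.out.println("Event queue:      " + env.orderEventsQueue);
    }
}
```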

Regardless of whether you have stable integration environments or not, the handoffs between consumer and provider need to be managed. If they are not, your chances of things going smoothly go down. I recommend creating a formal release plan that clearly shows when iterations of the service will be released for integration testing. It should also show cutoff dates for when feature requests/bug reports must be received in order to make it into a subsequent iteration. Most companies are using iterative development methodologies, and this doesn’t prevent that from occurring. Not all development iterations should go into the stable environment, however. Odds are, the consumer development teams (especially if there’s more than one) and the service development team are not going to have their schedules perfectly synchronized. As a result, the service development team can’t expect that a consumer will test particular features within a short timeframe. So, while a development iteration may occur every two weeks, maybe every third iteration goes into a stable integration environment, giving consumers six weeks to perform their integration testing. You may only have three or four stable integration releases of a service within its development lifecycle. Each release should have formal release notes and set clear expectations for service consumers. Which operations work and which ones don’t? What data sets can be used? Can performance testing be done? Again, problems happen when expectations aren’t managed. The clearer the expectations, the more smoothly things can go. It also makes it easier to see who dropped the ball when something does go wrong. If there’s no formal statement regarding what’s available within a service at any particular point in time, you’ll just get a bunch of finger pointing that exposes the poor communication that has happened.
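
For what it’s worth, the cadence described above is easy to lay out explicitly. The following small Java sketch, with made-up dates, simply prints a two-week iteration schedule and flags every third iteration as the one promoted to the stable integration environment, which is what gives consumers their six-week integration window.

```java
import java.time.LocalDate;

// Small sketch of the cadence described above, with hypothetical dates:
// two-week development iterations, with every third iteration promoted to the
// stable integration environment (a six-week window between promotions).
public class ReleaseCadence {
    public static void main(String[] args) {
        LocalDate start = LocalDate.of(2007, 9, 3); // hypothetical project start
        int iterationLengthWeeks = 2;
        int promoteEvery = 3;

        for (int iteration = 1; iteration <= 9; iteration++) {
            LocalDate end = start.plusWeeks((long) iteration * iterationLengthWeeks);
            boolean promoted = iteration % promoteEvery == 0;
            System.out.printf("Iteration %d ends %s%s%n",
                    iteration, end, promoted ? "  -> promote to stable integration" : "");
        }
    }
}
```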

Ultimately, managing expectations is the key to success. The burden of this falls on the shoulders of the service manager. As a service provider, the manager is responsible for all aspects of the service, including customer service. This applies to all releases of a service, not just the ones in production. Providing good customer service is about managing expectations. What do you think of products that don’t work the way you expect them to? Odds are, you’ll find something else instead. Those negative experiences can quickly undermine your SOA efforts.

The Reuse Marketplace

Marcia Kaufman, a partner with Hurwitz & Associates, posted an article on IT-Director.com entitled “The Risks and Rewards of Reuse.” It’s a good article, and its three recommendations can really be summed up in one word: governance. While governance is certainly important, the article misses another, perhaps more important, factor: marketing.

When discussing reuse, I always refer back to a presentation I heard at SD East way back in 1998. Unfortunately, I don’t recall the speaker, but he had established reuse programs at a variety of enterprises, some successful and some not. He indicated that the factor that influenced success the most was marketing. If the groups that had reusable components/services/whatever did an effective job of marketing their goods and getting the word out, the reuse program as a whole would be more successful.

Focusing on governance alone still means those service owners are sitting back and waiting for customers to show up. While the architectural governance committee will hopefully catch a good number of potential customers and send them in the direction of the service owner, that committee should be striving for “rubber stamp” status, meaning the project teams should have already sought out potential services for reuse. This means that the service owners need to be marketing their services effectively so that they get found in the first place. I imagine the potential customer using Google-like searches on the service catalog, but then within the service catalog, you’d have a very Amazon-like feel that might say things like “30% of other customers found this service interesting…” Service owners would monitor this data to understand why consumers are or are not using their services. They’d be able to see why particular searches matched, what information the customer looked at, and whether the customer eventually decided to use the service/resource or not. Interestingly, this is exactly what companies like Flashline and ComponentSource were trying to do back in the 2000 timeframe, with Flashline offering a product to establish your own internal “marketplace” while ComponentSource was much more of a hosted solution aimed at a community broader than the enterprise. With the potential to utilize hosted services always on the rise, this makes it even more interesting, because you may want your service catalog to show you both internally created solutions and potential hosted solutions. Think of it as Amazon.com on the inside, with Amazon partner content integrated from the outside. I don’t know how easily one could go about doing this, however. While there are vendors looking at UDDI federation, what I’ve seen has been focused on internal federation within an enterprise. Have any of these vendors worked with, say, StrikeIron, so that hosted services show up in their repository (if the customer has configured it to allow them)? Again, it would be very similar to Amazon.com. When you search for something on Amazon, you get some items that come from Amazon’s inventory. You also get links to Amazon partners that have the same products, or even products that are only available from partners.
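
As a sketch of how a catalog might back a statement like “30% of other customers found this service interesting,” the following Java snippet computes, from hypothetical consumer registrations, the share of one service’s consumers that also use another service. All service and consumer names are invented; a real catalog product would obviously gather and present this differently, but the underlying calculation is this simple.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Illustrative sketch only: one way a service catalog could support a statement
// like "30% of consumers of service A also use service B", based on recorded
// consumer registrations.
public class CatalogRecommendation {

    static double coUsage(Map<String, Set<String>> consumersByService, String serviceA, String serviceB) {
        Set<String> consumersOfA = consumersByService.getOrDefault(serviceA, new HashSet<>());
        if (consumersOfA.isEmpty()) {
            return 0.0;
        }
        long alsoUseB = consumersOfA.stream()
                .filter(consumersByService.getOrDefault(serviceB, new HashSet<>())::contains)
                .count();
        return (double) alsoUseB / consumersOfA.size();
    }

    public static void main(String[] args) {
        Map<String, Set<String>> consumersByService = new HashMap<>();
        consumersByService.put("CustomerLookupService",
                new HashSet<>(Arrays.asList("PortalApp", "BillingApp", "CrmApp", "OrderApp", "SupportApp",
                        "MobileApp", "PartnerGateway", "ReportingApp", "FulfillmentApp", "MarketingApp")));
        consumersByService.put("AddressValidationService",
                new HashSet<>(Arrays.asList("PortalApp", "BillingApp", "FulfillmentApp")));

        double share = coUsage(consumersByService, "CustomerLookupService", "AddressValidationService");
        // 3 of the 10 CustomerLookupService consumers also use AddressValidationService = 30%.
        System.out.printf("%.0f%% of CustomerLookupService consumers also use AddressValidationService%n",
                share * 100);
    }
}
```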

This is a great conceptual model; however, I do need to be a realist regarding the potential of such a robust tool today. How many enterprises have a service library large enough to warrant establishing such a rich, marketplace-like infrastructure? Fortunately, I do think this can work. Reuse is about much more than services. If all of your reuse is targeted at services, you’re taking a big risk with your overall performance. A reuse program should address not only service reuse, but also reuse of component libraries, whether internal corporate libraries or third-party libraries, and even shared infrastructure. If your program addresses all IT resources that have the potential for reuse, the inventory may be large enough to warrant an investment in such a marketplace. Just make sure that it’s more than just a big catalog. It should provide benefit not only for the consumer, but for the provider as well.

Disclaimer
This blog represents my own personal views, and not those of my employer or any third party. Any use of the material in articles, whitepapers, blogs, etc. must be attributed to me alone, without any reference to my employer. Use of my employer’s name is NOT authorized.