Archive for the ‘IT’ Category

Factoring in Barriers to Entry

As part of understanding what business projects need to do to leverage BPM technology, I've been trying to eat my own dog food, so to speak. I've been looking at some of the EA processes and trying to model them using BPMN. These processes aren't terribly complex, but at the same time, there is potential for technology to assist in their execution. They involve email distribution, task assignment, timer-based checks, notifications, etc., the same things that a process for a non-IT department may want to leverage too. The problem is that I look at these simple, lightweight processes, think about the learning curve required to leverage the typical enterprise BPM suite, and realize that the barrier to entry is a significant inhibitor. Even using BPMN rather than the built-in flowchart template, I can generate the process model in Visio very quickly.
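To make "simple, lightweight process" concrete, here's a minimal sketch in Python of the kind of flow I'm describing; every name in it (Task, send_email, etc.) is invented purely for illustration, not taken from any BPM product:

    from dataclasses import dataclass
    from datetime import datetime, timedelta

    @dataclass
    class Task:
        assignee: str
        description: str
        due: datetime
        done: bool = False

    def send_email(to: str, subject: str) -> None:
        # Stand-in for a real email distribution call.
        print(f"EMAIL to {to}: {subject}")

    def start_review(assignee: str, artifact: str) -> Task:
        # Task assignment plus notification: most of the "process model."
        task = Task(assignee, f"Review {artifact}", datetime.now() + timedelta(days=5))
        send_email(assignee, f"New task: {task.description}")
        return task

    def timer_check(task: Task) -> None:
        # Timer-based check: nag the assignee if the due date passes.
        if not task.done and datetime.now() > task.due:
            send_email(task.assignee, f"Reminder: {task.description} is overdue")

A process this small is exactly where the learning curve of a full BPM suite is hardest to justify.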

A challenge that the BPM space faces right now is its barrier to entry. There are tools, most prominently SharePoint, that excel at having a low barrier to entry. When a team can quickly create a process model that can orchestrate and manage their work requests via an intranet site, that's a big win. At the same time, does that low barrier to entry eventually become a boat anchor, either through infrastructure that scales poorly or a lack of more advanced features such as process analytics? On the flip side, does the business need another technology that requires significant development expertise and months-long (or longer) projects to utilize? What's the right path to take?

My opinion is that adoption is more important than sophistication, especially given the rate of technology change today and the influence of consumer technologies on the enterprise. There is so much an individual user can do today, and human nature is to take the path of least resistance. This doesn't necessarily mean that we should all pick the tool with the lowest barrier to entry, but it does mean that whatever tool you choose, you must get that barrier to entry to an appropriate point, especially if there are competing technologies that could be used instead. If your BPM technology requires that every process initiative involve the equivalent of a senior developer, that's a big problem when it's something the end users could do with Visio, Excel, or anything else. Find a way to lower the barrier.

Book now available via Safari Books Online

Thanks to Google alerts, I found out that my book, SOA Governance, is now available via Safari Books Online. You can access it here. If you enjoy it, consider voting for me as the Packt Author of the Year.

Maturity Levels

While I was working on a maturity model, a colleague pointed out a potential pitfall. The way I had defined the levels, they were too focused on standardized processes, which was not my intent. Indeed, many maturity efforts, such as CMMI, tend to be all about establishing standard processes across the enterprise. The problem is that just because you have standard processes doesn't mean you're actually getting the intended results from the capability. I'm sure this will ring true with my friend James McGovern, who has frequently lambasted CMMI on his blog. So, to fix things, I propose the following maturity levels, and I'd like feedback from my readers.

  1. Nonexistent/Not applicable: The capability either does not exist at the organization or is not needed.
  2. Ad hoc: The capability exists at the organization; however, the organization has no idea whether it is performed consistently, whether within a team or across teams. There is no way to measure the costs and benefits associated with the capability, and no target costs and benefits have been defined.
  3. Measurable: The capability exists, and the organization is tracking the costs and benefits associated with it. As the measurements show, there is no consistency in how the capability is performed, either within teams or across teams. The organization has not defined any target costs and benefits.
  4. Defined: The capability exists, the organization is tracking its costs and benefits, and target costs and benefits have been defined. There is inconsistency, however, in achieving those targets. Note that different teams can have different target costs and benefits if the organization believes that a single, enterprise-wide standard is not in its best interest.
  5. Managed: The capability exists, its costs and benefits are tracked, target costs and benefits have been defined, and the teams executing the capability are all achieving those targets.
  6. Optimizing: The capability is fully managed, and processes exist to continually monitor both the performance of the teams performing the capability and the targets themselves, making changes as needed, whether that means new targets, new operational models (e.g., switching from a centralized to a decentralized approach, relying on a service provider, etc.), new processes, or any other change for the better.

Maturity levels need to show continual improvement, and they can't be solely about standardizing a process: a process may not need to be standardized across the enterprise, and even standardized processes may fail to achieve the desired cost levels. Standardization is one way of getting there, and I've tried to make these descriptors applicable to many paths. Let me know what you think.
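For illustration only, here's a minimal sketch of how these levels might be encoded for an assessment tool; the boolean criteria are my own shorthand for the definitions above:

    from enum import IntEnum

    class Maturity(IntEnum):
        NOT_EXISTENT = 1
        AD_HOC = 2
        MEASURABLE = 3
        DEFINED = 4
        MANAGED = 5
        OPTIMIZING = 6

    def assess(exists: bool, measured: bool, targets_defined: bool,
               targets_met: bool, continually_improving: bool) -> Maturity:
        # Each level builds on the previous one, mirroring the list above.
        if not exists:
            return Maturity.NOT_EXISTENT
        if not measured:
            return Maturity.AD_HOC
        if not targets_defined:
            return Maturity.MEASURABLE
        if not targets_met:
            return Maturity.DEFINED
        if not continually_improving:
            return Maturity.MANAGED
        return Maturity.OPTIMIZING

Note that nothing in this progression requires an enterprise-wide standard process, which is the point.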

Black/White, Coding/Configuration, and other Shades of Gray

I’ve been going through the TOGAF 9 documentation, and in the Application Software section of the Technical Reference Model, two categories are recognized: Business Applications and Infrastructure Applications. They are defined as follows:

Business applications … implement business processes for a particular enterprise or vertical industry. The internal structure of business applications relates closely to the specific application software configuration selected by an organization.
Infrastructure applications … provide general purpose business functionality, based on infrastructure services.

There’s a lot more to the descriptions than this, but what jumped out at me was the typical black-and-white breakdown of infrastructure and “not” infrastructure. Normally the split is application versus infrastructure, but since TOGAF uses the term infrastructure application, that wording obviously won’t work; you get the point, though. What I’ve found at the organizations I’ve worked with is that there’s always a desire to draw a black-and-white line between the world of infrastructure and the application world. In reality, it’s not that easy to draw such a line, because it’s an ever-changing continuum. This is far easier to see from the infrastructure side, where infrastructure used to mean physical devices but now clearly includes software solutions ranging from application servers to, as TOGAF 9 correctly calls out in its description of infrastructure applications, commercial off-the-shelf products.

The biggest challenge in the whole infrastructure/application continuum is knowing when to shift your thinking from coding to configuration. As things become more commoditized and more like infrastructure, your thinking has to shift to that of configuration. If you continue with a coding and customization mentality, you’re likely investing significant resources into an area without much potential for payback. There are parallels between this thinking and the cloud computing and software as a service movements. You should use this thinking when making decisions on where to leverage these technologies and techniques. If you haven’t changed your thinking from coding to configuration, it’s unlikely that you’re going to be able to effectively evaluate SaaS or cloud providers. When things are offered as a service, your interactions with them are going to be a configuration activity based upon the interfaces exposed, and it’s very unlikely that any interface will have as much flexibility as a programming language. If you make good decisions on where things should be configured rather than coded, you’ll be in good shape.
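As a purely illustrative sketch (the rule and field names are invented), the mental shift looks something like this: the coded version buries the decision in the program, while the configured version treats the same decision as data, which is the posture that SaaS and cloud interfaces force on you:

    # Coding mentality: the behavior is baked into the program.
    def route_request_coded(request: dict) -> str:
        if request["amount"] > 10000:
            return "manager-approval"
        return "auto-approve"

    # Configuration mentality: the same decision expressed as data,
    # changeable without touching code.
    ROUTING_RULES = [
        {"field": "amount", "op": "gt", "value": 10000, "route": "manager-approval"},
    ]
    DEFAULT_ROUTE = "auto-approve"

    def route_request_configured(request: dict) -> str:
        for rule in ROUTING_RULES:
            if rule["op"] == "gt" and request[rule["field"]] > rule["value"]:
                return rule["route"]
        return DEFAULT_ROUTE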

Understanding Your Engagement Model

People who have worked with me know that I’m somewhat passionate about having a well-defined engagement model. More often than not, I think we’ve created challenges for ourselves due to poorly defined engagement models. The engagement model normally consists of “Talk to Person A” or “Talk to Team B,” which means you’re going to get a different result every time. It also means that the interaction is going to be different, because no one comes to Person A or Team B with the same set of information, so the engagement is likely to evolve over time. In some cases, this is fine. If part of your engagement model is to provide mentoring in a particular domain, then you need to recognize that the structure of the engagement will likely be time-based rather than information-based, at least in terms of the cost. Think of it as the difference between a fixed-cost standard offering and a variable-cost engagement (usually based on time) from a consulting firm. I frequently recommend that teams try to express their service offerings in this manner, especially when involved in the project estimation process. Define which services are fixed cost, which are variable cost, and what that variance depends on. This should be part of the process of operationalizing a service, and someone should be reviewing the team’s effort to make sure they’ve thought about these concerns.

When thinking about your services in the ITSM sense, it’s good to create a well-defined interface, just as we do in the web services sense. Think about how other teams will interact with your services. In some cases, it may be an asynchronous interaction via artifacts. An EA team may produce reference models, patterns, etc. for other teams to use in their projects. These artifacts are designed on their own timeline, separate from any project, and projects can access them at will. Requests to update them based on new information go into a queue and are executed according to the priorities of the EA manager. On the other hand, an architecture review is executed synchronously, with a project team making a request for a review, provided they have the required inputs (an architectural specification, in most cases), and the output being the recommendations of the reviewer, and possibly a formal approval to proceed (or not).
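To illustrate, here's a sketch of my own (not a formal ITSM artifact) showing how such offerings could be captured as structured catalog entries that make the interaction style, required inputs, outputs, and cost model explicit:

    from dataclasses import dataclass, field

    @dataclass
    class ServiceOffering:
        name: str
        interaction: str              # "synchronous" or "asynchronous"
        required_inputs: list = field(default_factory=list)
        outputs: list = field(default_factory=list)
        cost_model: str = "fixed"     # or "variable (time-based)"

    CATALOG = [
        ServiceOffering(
            name="Architecture Review",
            interaction="synchronous",
            required_inputs=["architectural specification"],
            outputs=["reviewer recommendations", "approval decision"],
            cost_model="fixed",
        ),
        ServiceOffering(
            name="Domain Mentoring",
            interaction="synchronous",
            required_inputs=["problem statement"],
            outputs=["guidance"],
            cost_model="variable (time-based)",
        ),
    ]

Writing the catalog down this way forces the fixed-versus-variable cost conversation described earlier.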

If you’re providing infrastructure services, such as new servers or configuration of load balancers, then in addition to the project-based interactions, you must also think about what your services are at run time. While most teams include troubleshooting services, sometimes the interface lacks definition. In addition, run-time services need to go beyond troubleshooting. When the dashboard lights are all green, what services do you provide? Do you provide reports to the customers of your services? There’s a wealth of information to be gained by observing the behavior of the system when things are going well, and that information can lead to service improvements, whether yours or someone else’s. Think about this when you’re defining your services.

Thoughts on designing for change

I had a brief conversation with Nick Gall (Twitter: ironick) of Gartner on Twitter regarding designing for change. Back in the early days of SOA, I’m pretty sure that I first heard the phrase, “we need to build things to change” from a Gartner analyst, although I don’t recall which one. Since that time, there’s been a lot of discussion on the subject of designing/building for change, usually tied to a discussion on REST versus WS-*. Yesterday, I stepped back from the debate and thought, “Can we ever design for change, and is that really the right problem?”

As I told Nick, technology and design choices can certainly constrain the flexibility that you have. Think about the office buildings that many of us work in. There was a time when they weren’t big farms of cubicles; they actually had real walls and doors. Did this design work? Yes. Was it flexible enough to meet the needs of an expanding work force? No. You couldn’t easily and quickly create new conference rooms, change the size of spaces, etc. Did it meet all possible changes the company would go through? No. Did the planners ever think that every cubicle would consume the amount of electricity it does today? What about wiring for the Internet? Sometimes those buildings need to be renovated or even bulldozed. The same thing is true on the technology side. We made design decisions that worked and were flexible, yet not flexible enough for changes that could not have been easily predicted at most companies, such as the advent of the Internet.

Maybe I’m getting wiser as I go through more of these technology changes, but for me, the fundamental problem is not the technology selection. Yes, poor design and technology selection can be limiting, but I think the bigger problem is that we have poor processes for determining what changes are definitely coming, what changes might be coming, and how and when to incorporate those changes into what IT does, despite the available predictions from the various analysts. Instead, we have a reactive, project-driven approach without any sort of portfolio planning and management expertise. On this point, I’m reminded of a thought I had while sitting in a Gartner talk on application and project portfolio management a year or two ago: if I’m sitting in a similar session on service portfolio management five years from now, we’ve missed the boat and we still don’t get it. Develop a process for change, and it will help you make good, timely design choices. That process for change involves sound portfolio management and rationalization practices.

SOA Governance Book Review

Fellow Twitterer Leo de Sousa posted a review of my book, SOA Governance, on his blog. Leo is an Enterprise Architect at the British Columbia Institute of Technology, and is leveraging the book on their journey in adopting SOA. Thanks for the review, Leo. I’m glad you posted it before the Stanley Cup playoffs begin as my St. Louis Blues will be taking on your Vancouver Canucks, and I wouldn’t have wanted the upcoming Blues victory to taint your review!

SOA Governance Podcast

I recorded a podcast on various SOA Governance topics with Bob Rhubart, Cathy Lippert, and Sharon Fay of Oracle as part of Oracle’s Arch2Arch Podcast series. You can listen to part one via this link, or you can find it at Oracle’s ArchBeat site here.

Governing Anonymous Service Consumers

On Friday, the SOA Chief (Tim Vibbert), Brenda Michelson, and I had a conversation on Twitter regarding SOA governance and anonymous service consumers. Specifically, how do you provide run-time governance for a service that is accessed anonymously?

If you’ve read this blog or my book, you’ll know that my take on run-time SOA governance is the enforcement and/or monitoring of compliance with the policies contained within the service contract. Therein lies the biggest problem: if the service consumer is anonymous, is there a contract? There’s certainly the functional interface, which is part of the contract, but there isn’t any agreement on the allowed request rates, hours of usage, etc. So what do we do?

The first thing to recognize is that while there may not be a formal contract that all consumers have agreed to, there should always be an implied contract. When two parties come to the table to establish an agreement, it’s likely that both sides come with a contract proposal, and the final contract is a negotiation between the two. The same thing must be considered here. If someone starts using a service, they have some implicit level of service that they expect to receive. Likewise, the service provider knows both the capacity they can currently handle and how they think a typical consumer will use the service. Unfortunately, these implied contracts can frequently be wrong. The advice here is that even if you are trying to lower the barrier to entry by allowing anonymous access, you still need to think about service contracts and design to meet some base level of availability.

The second thing to do, which may seem obvious, is to avoid anonymous access in the first place. It’s very hard to enforce anything when you don’t know where requests are coming from. Your authorization policy can simply be that you must be an authenticated user to use the service. Even in an internal setting, having some form of identity on the message, even if there are no authentication or authorization policies, becomes critical when you’re trying to understand how systems are interacting, perform capacity planning, or troubleshoot a problem. Even services with low barriers to entry, like the Twitter API, often require identity.

The next thing you should do is leverage a platform with elasticity. That is, the available capacity should grow and shrink with the demand. If it’s anonymous, and new consumers can start using it simply by getting the URLs from someone else, you have no control over the rate at which usage will scale. If the implied level of availability is that the service is always available, you’ll need on-demand resources.

Finally, you still need to protect your systems. No request is completely anonymous, and there are things you can do to ensure the availability of your service against rogue consumers. Requests will have source IP addresses on them, so you can look for bad behavior at that level. You can still do schema validation, look for SQL injection, etc. In other words, you still need to do DoS protection. You also should be looking at the usage metrics on a frequent basis to understand the demand curve, and making decisions accordingly.
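As a minimal sketch of that last point (illustrative only, not tied to any particular product), basic protection for anonymous consumers can be keyed off the source IP address, for example with a token-bucket rate limiter:

    import time
    from collections import defaultdict

    RATE = 5.0    # sustained requests per second allowed per source IP
    BURST = 10.0  # short-term burst allowance

    _buckets = defaultdict(lambda: {"tokens": BURST, "last": time.monotonic()})

    def allow_request(source_ip: str) -> bool:
        # Token bucket per source IP: rogue consumers get throttled even
        # though they never agreed to a formal contract.
        bucket = _buckets[source_ip]
        now = time.monotonic()
        bucket["tokens"] = min(BURST, bucket["tokens"] + (now - bucket["last"]) * RATE)
        bucket["last"] = now
        if bucket["tokens"] >= 1.0:
            bucket["tokens"] -= 1.0
            return True
        return False

The RATE and BURST values are the interesting part: they are exactly the "implied contract" discussed above, made explicit.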

The Role of the Service Manager

Tony Baer joined the SOA Consortium on one of its working group conference calls this week to discuss his research on the connections between ITIL and SOA. Both he and Beth Gold-Bernstein have blogged about the call, Beth focusing on the broader topic of SOA and ITIL, and Tony talking about service ownership, as these topics were the meat of the conversation between Beth, Tony, and me.

I’ve spent the past few years thinking about all things SOA, and recently I completed the ITIL v3 Foundations certification and have been doing a lot of work in the ITIL/ITSM space. When you move away from the technology side of the discussion and actually talk about the people and process side, you’ll find that there are significant similarities between ITIL/ITSM adoption and SOA adoption. Tony had a diagram in his presentation that illustrated this, which Beth reproduced on her blog. Having looked at this from both the SOA world of the application developer and the ITIL/ITSM world of IT operations, I believe there’s a lot we can learn from ITIL in our SOA adoption efforts. Foremost, ITIL defines the role of Service Manager. Anyone who’s listened to my panel discussions and heard my answer to the question, “What’s the one piece of advice you have for companies adopting SOA?” knows that I always answer, “Make sure all your services have owners.” I’ve decided I like the term “Service Manager” better than “Service Owner” at this point, but if you refer to my past posts, you can treat the two terms synonymously.

So what does a service manager do? Let’s handle the easy one first. Clearly, service management begins with the initial release of the service. The service manager is accountable for defining that release and putting the project in motion to get it out the door. This involves working with the initial service consumer(s) to go over requirements, get the interface defined, build, test, deploy, etc. There’s probably a project manager, developers, etc. helping in the effort, but in a RACI model, it’s the service manager who has accountability. The work doesn’t end there, however. Once the service is in production, the service manager must be receiving reports on the service’s utilization, availability, etc., always making sure it meets the needs of its consumer(s). In other words, they must ensure that “service” is being provided.

They must also be defining the next release of the service. How does this happen? Well, part of it comes from analysis of current usage, part of it comes from external events, such as a merger, acquisition, or new regulations, and part of it comes from seeking out new customers. Some consumers may come along on their own with new requests. Reading between the lines, however, it is very unlikely that a service manager manages only one service. It is more likely that they manage multiple services within a common domain. Even if it is one service, it’s likely that the service has multiple operations. The service manager is the one responsible for the portfolio of services and their operations, and trying to find the right balance between meeting consumer needs and keeping a maintainable code base. If there’s redundancy, the service manager is the one accountable for managing it and getting rid of it where it makes sense. This doesn’t negate the need for enterprise service portfolio management, because sometimes the redundancy may be spread across multiple service managers.

So what’s the list? Here’s a start. Add other responsibilities via comments.

  • Release Management (a.k.a. Service Lifecycle Management)
  • Production Monitoring
  • Customer (Consumer) Management
  • Service Management
  • Marketing
  • Domain Research: Trends associated with the service domain
  • Domain-Specific Service Portfolio Management

Think hard about this, as it’s a big shift for many IT organizations today. How many organizations have their roles strictly structured around project lifecycle activities rather than service lifecycle activities? How many organizations perform these activities even at an application level? It’s a fundamental change to the culture of many organizations.

SOI versus SOA

Anne Thomas Manes’ “SOA is dead” post back at the beginning of the year sparked quite a debate, which is still going strong. On the Yahoo SOA group, the question was asked on exactly what Anne meant by SOI, or Service-Oriented Integration. Here’s my response:

SOI, service-oriented integration, is probably best stated as WSOI: Web Services-Oriented Integration. It’s simply the act of taking the same integration points that arise in a project and using web services or some other XML-over-HTTP approach to integrate the systems. Could this constitute a service-oriented application architecture? Absolutely, but in my mind, there are at best incremental benefits to this approach versus some other integration technology.

Because the scope is a single application, it’s unlikely that any ownership domains beyond the application itself were identified, so there won’t be anyone responsible for finding and removing other redundant service implementations. Because the scope of the services didn’t change, only the technologies used, it’s unlikely that the services will have any greater potential for reuse than they would with another integration technology, except that XML/HTTP will be more interoperable than, say, Java RMI, if that’s even a concern. To me, SOA must be applied at something larger than a single application to get anything more than these incremental gains. Services should be defined along ownership domains that create accountability for driving redundancy out of the enterprise where appropriate.

This is why an application rationalization effort or application/service portfolio management is a critical piece of being successful. If it’s just a “gut feel” that there is a lot of waste in the IT systems, arbitrary use of a different integration technology won’t make that go away. Only working to identify the areas of redundancy/waste, defining appropriate ownership domains, and then driving out the redundancy through the use of services will make a significant difference.

Is Twitter the Cloud Bus?


Courtesy of Michael Coté, I received a Poken in the mail as one of the lucky listeners to his IT Management and RIA Weekly podcasts. I had to explain to my oldest daughter (and my wife) what a Poken is, and how it’s utterly useless until I run into someone else in St. Louis who happens to have one, or go to some conference where someone might have one. Oh well. My oldest daughter was also disappointed that I didn’t get the panda one she saw on the website. So, if you happen to own a Poken and plan on being in St. Louis anytime soon, or if you’re going to be attending a conference that I will be at (sorry, nothing planned in the near future), send me a tweet and we can actually test out this Poken thing.

Speaking of the RIA Weekly podcast, thanks to Ryan Stewart and Coté for the shout-out in episode #46 about my post on RIAs and Portals, which was inspired by a past RIA Weekly podcast. More important than the shout-out, however, was the discussion they had with Jeff Haynie of Appcelerator. The three of them got into a conversation about the role of SOA on the desktop, which was very interesting. It was nice to hear someone thinking about things like inter-application communication on the desktop, since integration has been so focused on the server side for many years. What really got me thinking was Coté’s comment that you can’t build an RIA these days without including a Twitter client inside of it. At first, I was thinking about the need for a standard way of handling inter-application communication in the RIA world. Way back when, Microsoft and Apple were duking it out over competing ways of getting desktop apps to communicate with each other (remember OpenDoc and OLE?). Now that the pendulum is swinging back toward the world of rich UIs, it won’t surprise me at all if the conversation around inter-application communication for desktop apps comes up again. What’s needed? Just a simple message bus to create a communication pathway.

In reality, it’s actually several message buses. An application can leverage an internal bus for communication among its own components, a desktop/VM-based bus for communication with other apps on the same host, another bus for communication within a local networking domain, and then possibly a bus in the clouds for communication across domains. Combining this with Coté’s comment made me think, “Why not Twitter?” As Coté suggested, many applications are embedding Twitter clients. The direct messaging capability allows point-to-point communication, and the public tweets can act as a general pub-sub event bus. In fact, this is already occurring today. Today, Andrew McAfee tweeted about productivity tools on the iPhone (and elsewhere), and a suggestion was made about Remember The Milk, a web-based GTD program with an iPhone client and a very open integration model, which includes the ability to listen for tweets on Twitter that add new tasks. There’s a lightweight protocol to follow within the tweet, but for basic stuff, it’s as simple as “d rtm buy tickets in 2 days”. Therefore, if someone is using RTM for task management, some other system can send a tweet to RTM to assign a task to a Twitter user. The friend/follower structure of Twitter provides a rudimentary security model, but all in all, it seems to work with a very low barrier to entry. That’s just cool. Based on this example, I think it’s entirely possible that we’ll start seeing cloud-based applications that rely on Twitter as the messaging bus for communication.
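As a sketch of the idea, here's what that might look like; the TwitterClient below is a hypothetical stand-in, not the real Twitter API (which worked differently and has since changed substantially):

    class TwitterClient:
        # Hypothetical stand-in for a real Twitter API client.
        def send_direct_message(self, to_user: str, text: str) -> None:
            print(f"d {to_user} {text}")  # mimics the "d user message" syntax

        def poll_mentions(self, user: str) -> list:
            return []  # a real client would fetch tweets mentioning @user

    def publish_task(bus: TwitterClient, task: str) -> None:
        # Point-to-point messaging: a direct message to the consuming
        # service's account, as in the RTM example above.
        bus.send_direct_message("rtm", task)

    def consume_events(bus: TwitterClient, service_account: str) -> None:
        # Pub-sub: treat public mentions of the service as an event topic.
        for tweet in bus.poll_mentions(service_account):
            print("event received:", tweet)

    publish_task(TwitterClient(), "buy tickets in 2 days")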

SOA Governance RefCard Now Available

I’m happy to announce I’ve now published a RefCard (reference card) on SOA Governance based on the content in my book from Packt Publishing. If you want to get a taste of what the book has to offer, follow this link over to DZone.com to download it for free.

Don’t Go On an IT Diet, Change Your Behavior

I’ve refrained from incorporating the current economic crisis into my posts… until now. In a recent discussion, I compared the current situation to what many, many people do every new year. They make a resolution to lose weight, go on some fad diet or start going to the fitness center, and maybe lose that weight, but then they revert to their behavior of a few months prior and gain the weight (and potentially more) right back.

Enterprises are in a similar state. Priorities have shifted to where cost containment and cutting are at the top of the list. While the knee-jerk reaction is to stop investing in any long-term initiatives, this could be a risky approach. If I don’t eat for 4 days, I may quickly drop the weight I need to, but guess what? I still need to eat. Not eating for 4 days will only make me more unhealthy, and then when I do eat, the weight will come right back.

These times should not mean that organizations drop their efforts to adopt SOA, ITIL/ITSM, or any other long-term initiative. Most of these efforts try to achieve ROI through cost reduction by eliminating redundancy in the enterprise, which is exactly what is needed today! The risk, however, is that these efforts must be held accountable for the goals they claim to achieve. They must also be prepared to adjust their actions to speed up the pace, where possible. No one could have predicted the staggering losses we’re seeing, and sometimes it is necessary for a company’s survival to adjust the pace. If these efforts are succeeding in reducing costs, however, we shouldn’t kill them just because they take longer to achieve their goals; otherwise, we’ll find ourselves back in the same boat when the next change in priorities or goals happens.

The whole point of Enterprise Architecture, SOA, and many of these other strategic IT initiatives is to allow IT to be more agile: to respond more quickly to changes in business objectives. Guess what? We’re in the middle of an unprecedented change. My guess is that the best survivors of this meltdown will be the organizations that don’t go on a starvation diet, but instead simply recognize that their priorities and goals have changed and execute without significant disruption to the way they utilize IT. If your EA team, SOA efforts, ITIL efforts, or anything else are inefficient and not providing the intended value, then you’re at risk of being cut, but you were probably at risk anyway; now someone just happens to be looking for targets. If EA has been adding value all along, then you’ll probably be a strategic asset that helps your organization weather the storm.

Most Read Posts for 2008

According to Google Analytics, here are the top read posts from my blog for 2008. This obviously doesn’t account for people who read exclusively through the RSS feed, but it’s interesting to know what posts people have stumbled upon via Google search, etc.

10. Governance Does Not Imply Command and Control. This was posted in August of 2008, and intended to change the negative opinion many people have about the term “governance.”

9. To ESB or not to ESB. This was posted in July of 2007, and gave a listing of five different types of ESBs that exist today and how they may (or may not) fit into your environment.

8. Getting Started with SOA Governance. This was posted in September of 2008, just before my book was released. It emphasizes a policy first approach, stressing education over enforcement.

7. Dish DVR Upgrade. This was posted in November of 2007 and had little to do with SOA. It tells the story of how Dish Network pushed out an upgrade to the software on their DVRs that wiped out all of my existing timers, and I missed recording some shows as a result. The lesson for IT: even if you think there’s no chance that a change will impact someone, you still should make them aware that a change is occurring.

6. Most popular posts to date. This is rather humorous: this post from July of 2007 was much like this one, a list of posts that Google Analytics had shown as most viewed since January of 2006. Maybe this one will show up next year. At the very least, it means someone enjoys these summary posts.

5. Dilbert’s Guide to Governance. In this post from June of 2007, I offered some commentary on governance in the context of a Dilbert cartoon that was published around the same timeframe.

4. Service Taxonomy. Based on an analysis of the search keywords that bring people to my pages, I’m not surprised to see this one here. This was posted in December of 2006, and while it doesn’t provide a taxonomy, it gives two reasons for having one: determining service ownership and choosing the technical implementation platform. I don’t think you should have taxonomies just to have taxonomies; if the classification isn’t serving a purpose, it’s just clutter.

3. Horizontal and Vertical Thinking. This was posted in May of 2007 and is still one of my favorite posts. I think it really captures the change in thinking that is required for more strategic solutions; however, I now realize that the challenge is in determining when horizontal thinking is needed and when it is not. It’s not an easy question, and it requires a broad understanding of the business to answer correctly.

2. SOA Governance Book. This was posted in September of 2008 and is when I announced that I had been working on a book. Originally, this had a link to the pre-order page from the publisher, later updated to include direct links there and to the page on Amazon. You can also get it from Amazon UK, Barnes and Noble, and other online bookstores.

1. ITIL and SOA. Seeing this post come in at number one was a surprise to me. I’m glad to see it up there, however, as it is something I’m currently involved with, and also an area in need of better information. There are so many parallels between these two efforts, and it’s important to eliminate the barriers between the developer/architecture world of SOA and the infrastructure/operations world of ITIL/ITSM. Look for more posts on this subject in 2009.


Disclaimer
This blog represents my own personal views, and not those of my employer or any third party. Any use of the material in articles, whitepapers, blogs, etc. must be attributed to me alone, without any reference to my employer. Use of my employer's name is NOT authorized.