Archive for the ‘Management’ Category

What are your EA Services?

A week or so ago, I asked about defining EA services on Twitter. My use of the term services here is in the ITIL/ITSM sense, not what typically comes to mind when discussing SOA, though I could write another blog post dedicated to that subject. I’ve been working to define EA services at work, and it’s been a very interesting exercise. I had hoped that other EAs (or former EAs) on Twitter would have something to contribute.

Something that is a bit surprising to me is that many IT teams struggle to explain exactly what they do, especially those whose primary purpose isn’t project work. This doesn’t mean that the team isn’t needed, but it does put you at risk of having the team be defined by the individuals rather than by their responsibilities. Depending on who the individual is, you get a different set of capabilities, making it difficult to quantify and measure what the team, rather than the individual, does. A conversation with my boss and another team member on a different subject brought up the term “define by example,” and we all agreed that it’s usually a bad thing. Examples are very important in illustrating a concept, but they shouldn’t be the definition. The same thing goes for a team. Your team should not be defined by the individuals on it; rather, the individuals on it should be providing the defined services.

Getting back to the subject, the initial list of EA services I came up with, and had vetted by Aleks Buterman, Leo de Sousa, and Brenda Michelson, is:

  • Architectural Assessment Services: The operations in this service include anything that falls into the review/approve/comment category, whether required or requested. Ad hoc architectural questions probably go here, but those are ones I’m still sitting on the fence about.
  • Architectural Consulting Services: The operations in this service include anything where a member of the EA team is allocated to a project as a member of that project team, typically as a project architect. The day-to-day activities of that person would now be managed by a project manager, at least to the extent of the allocation.
  • Architectural Research Services: The operations in this service are those that fall into the research category, whether formal or informal. This would include vendor conversations, reading analyst reports, case study reviews, participation in consortiums, etc.
  • Architectural Reference Services: The operations in this service are those that entail the creation of reference material used for prescriptive guidance of activities outside of the EA team, such as patterns, reference models, reference architectures, etc.
  • Architectural Standards Services: Very similar to reference services, this service is about the creation of official standards. I’m still on the fence as to whether or not this should be collapsed into a single service with the reference services. Sometimes, standards are treated differently than other reference material, so I’m leaving it as its own service for now.
  • Architectural Strategy Services: Finally, strategy services capture the role of architecture in strategy development, such as the development of to-be architectures. If there is a separate strategy development process at your organization, this one represents the role of enterprise architecture in that process.

Now, the most interesting part of this process has not been coming up with this list, but thinking about the metadata that should be included for each of these services. Thinking like a developer, what are the inputs and outputs of each? Who can request them? Which of these are internal services (e.g., only ever requested by the EA manager), and which are external services (e.g., requested by someone outside of EA)? What are the processes behind these services? Are these services always part of a certain parent process, or are they “operations” in multiple processes? How do we measure these services? You can see why this suddenly feels very much like ITIL/ITSM, but it has parallels to how we should think about services in the SOA sense, too. Thinking in the long term, all of these services need to be managed. What percentage of work falls into each bucket? Today, there may be a stronger need to establish solid project architecture, leading to a higher percentage of time spent consulting. Next year, it may shift to strategy services or some other category. The year after that, the service definitions themselves may need to be adjusted to account for a shift toward more business architecture and less technology architecture. Adjusting to the winds of change is what service management is all about.
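
To make the developer analogy concrete, here’s a minimal sketch of how that service metadata might be captured, using Python purely as a stand-in for whatever catalog format you prefer. The fields and example values are my own illustrations, not a prescribed schema:

    from dataclasses import dataclass

    @dataclass
    class EAService:
        """Metadata for one EA service, in the ITIL/ITSM sense."""
        name: str
        inputs: list            # artifacts required to request the service
        outputs: list           # artifacts the service produces
        requesters: list        # who may request it (internal vs. external)
        parent_processes: list  # processes in which this is an "operation"
        metrics: list           # how the service is measured

    # Hypothetical entry for the assessment service described above
    assessment = EAService(
        name="Architectural Assessment",
        inputs=["architectural specification"],
        outputs=["review comments", "approval decision"],
        requesters=["project teams (external)"],
        parent_processes=["SDLC design review"],
        metrics=["reviews completed per month", "review turnaround time"],
    )

Even if this never becomes code, forcing each service through a template like this exposes the gaps in your definitions very quickly.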

So, my question to my readers is, what are your EA services? I’m sure I’m not the only EA out there who’s had to think about this. Even if your EA organization hasn’t, the next time you fill out your time card, think about what “service bucket” your efforts fall into. Do my categories make sense for what you do each week or month? If not, what’s missing? If it’s unclear which bucket something should go in, how would you redefine them? A consistent set of EA service definitions can definitely help all of us.

Understanding Your Engagement Model

People who have worked with me know that I’m somewhat passionate about having a well-defined engagement model. More often than not, I think we’ve created challenges for ourselves due to poorly-defined engagement models. The engagement model normally consists of “Talk to Person A” or “Talk to Team B,” which means that you’re going to get a different result every time. It also means that each interaction is going to be different, because no one is going to come to Person A or Team B with the same set of information, so the engagement is likely to evolve over time. In some cases, this is fine. If part of your engagement model is to provide mentoring in a particular domain, then you need to recognize that the structure of the engagement will likely be time-based rather than information-based, at least in terms of the cost. Think of it as the difference between a fixed-cost standard offering and variable-cost (usually time-based) work from a consulting firm. I frequently recommend that teams try to express their service offerings in this manner, especially when involved in the project estimation process. Define which services are fixed cost, which services are variable cost, and what that variance depends on. This should be part of the process for operationalizing a service, and someone should be reviewing the team’s effort to make sure they’ve thought about these concerns.

When thinking about your services in the ITSM sense, it’s good to create a well-defined interface, just as we do in the web service sense. Think about how other teams will interact with your services. In some cases, it may be an asynchronous interaction via artifacts. An EA team may produce reference models, patterns, etc. for other teams to use in their projects. These artifacts are designed on their own timeline, separate from any project, and projects can access them at will. Requests to update them based on new information go into a queue and are executed according to the priorities of the EA manager. On the other hand, an architecture review is executed synchronously: a project team requests a review, provided they have the required inputs (an architectural specification, in most cases), and the output is the recommendations of the reviewer, possibly along with a formal approval to proceed (or not).
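
As an illustration of treating an ITSM service like a typed web-service contract, here’s a sketch of the synchronous review interaction just described. All of the names are hypothetical; the point is simply that the required inputs and promised outputs are made explicit:

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class ReviewRequest:
        project: str
        architecture_spec: str   # the required input for a review

    @dataclass
    class ReviewResult:
        recommendations: List[str]   # the reviewer's recommendations
        approved: Optional[bool]     # formal approval to proceed, if given

    def request_architecture_review(request: ReviewRequest) -> ReviewResult:
        """Synchronous service: reject the request if the required input
        is missing, otherwise return the reviewer's output."""
        if not request.architecture_spec:
            raise ValueError("an architectural specification is required")
        # The review itself is a human activity; only the contract is modeled.
        return ReviewResult(recommendations=[], approved=None)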

If you’re providing infrastructure services, such as new servers, or configuration of load balancers, etc., in addition to the project-based interactions, you must also think about what your services are at run-time. While most teams include troubleshooting services, sometimes the interface is lacking definition. In addition, run-time services need to go beyond troubleshooting. When the dashboard lights are all green, what services do you provide? Do you provide reports to the customers of your services? There’s a wealth of information to be learned by observing the behavior of the system when things are going well, and that information can lead to service improvements, whether yours or someone else’s. Think about this when you’re defining your service.

Thoughts on designing for change

I had a brief conversation with Nick Gall (Twitter: ironick) of Gartner on Twitter regarding designing for change. Back in the early days of SOA, I’m pretty sure that I first heard the phrase, “we need to build things to change” from a Gartner analyst, although I don’t recall which one. Since that time, there’s been a lot of discussion on the subject of designing/building for change, usually tied to a discussion on REST versus WS-*. Yesterday, I stepped back from the debate and thought, “Can we ever design for change, and is that really the right problem?”

As I told Nick, technology and design choices can certainly constrain the flexibility that you have. Think about the office buildings that many of us work in. There was a time when they weren’t big farms of cubicles; they actually had real walls and doors. Did this design work? Yes. Was it flexible enough to meet the needs of an expanding work force? No. You couldn’t easily and quickly create new conference rooms, change the size of spaces, etc. Did it meet all possible changes the company would go through? No. Did the planners ever think that every cubicle would consume the amount of electricity they do today? What about wiring for the Internet? Sometimes those buildings need to be renovated or even bulldozed. The same thing is true on the technology side. We made some design decisions that worked and were flexible, yet not flexible enough for the changes that could not have been easily predicted in most companies, such as the advent of the Internet.

Maybe I’m getting wiser as I go through more of these technology changes, but for me, the fundamental problem is not the technology selection. Yes, poor design and technology selection can be limiting, but I think the bigger problem is that we have poor processes for determining what changes are definitely coming, what changes might be coming, and how and when to incorporate those changes into what IT does, despite the available predictions from the various analysts. Instead, we have a reactive, project-driven approach without any sort of portfolio planning and management expertise. On this point, I’m reminded of a thought I had while sitting in a Gartner talk on application and project portfolio management a year or two ago: if I’m sitting in a similar session on service portfolio management five years from now, we’ve missed the boat and we still don’t get it. Develop a process for change, and it will help you make good, timely design choices. The process for change involves sound portfolio management and rationalization processes.

What are the Services?

I recently completed a certification in ITIL v3 Foundations. On the plus side, I found that the ITIL framework provided some great structure around the concept of service management that is very applicable to SOA. There was one key question, however, that I felt was left unanswered. What are the services?

My assumption going in was that ITIL was very much about running IT operations within an enterprise, so I expected to see some sort of a service domain model associated with the “business of IT.” That’s not the case, at least not in the material I was given. There are a number of roles defined that are clearly IT specific, but overall, I’d say that many of the processes and functions presented were not specific to IT at all. As an example, ITIL Foundations won’t tell you whether server provisioning or application deployment should be services in your catalog or not. Without this, an effort to adopt ITIL can struggle in the same way as an SOA adoption effort can. I’ve seen first hand an organization thrash around trying to determine the right operational and engineering services. ITIL does offer the right guidance in helping you define them, in that it begins with understanding your customer.

This is the same question where many SOA initiatives struggle. We can have lots of conceptual talk about how to build services the right way, but actually defining the services that should be built is a challenge. In both ITIL and SOA adoption, there is a penalty for defining too many services. It’s probably more pronounced in ITIL, because managing and using those services tends to involve more manual effort, and therefore higher cost, than managing and using a web service; although, if you’re doing business-driven SOA, the costs may be very similar.

Overall, I definitely felt there is a lot of value in the ITIL v3 framework, and I think if you are leading an SOA adoption effort, it’s worth learning about, as it will help your efforts. If you’re looking to improve IT operations, it will likewise help your efforts. Just know that you’ll still need to figure out what your services are on your own, and that can have a big impact on the success of your adoption efforts.

Sizing your Center of Excellence

I was recently asked about the proper size for a competency center/Center of Excellence. While many might immediately try to base the size of the group on the size of the organization as a whole, my recommendation was based on two key factors:

  1. The engagement model of the group
  2. The current IT organizational structure and paths of information flow

The engagement model is important, because it has a direct impact on the ability of the group to fulfill its mission. Will your centralized team strictly be responsible for establishing policies, perhaps playing a role in reviews and approvals? Or will your team take a more hands-on role, either as a resource center for projects building or consuming services or as an outsourcing center for service development efforts? The first variety is almost completely divorced from the projects being executed in the organization, and as a result, its size should not be influenced by the number of concurrent projects. The latter two varieties, however, are directly involved with projects. Make the group too small, and you’ll have projects executing without your involvement, potentially running down different paths. Make it too large, and you’ll have people sitting idle or doing other non-service-related work on projects. Personally, I’m not a big fan of the outsourcing model, but that’s a subject for another blog.

The second factor, and certainly the more important one from a governance perspective, is the way information flows through your organization. If you are establishing a Center of Excellence or Competency Center as part of your SOA governance efforts, the first two processes of governance are establishing policy and then educating and communicating those policies outward. Every organization is different. In an organization where information does not flow easily, and collaborative sessions quickly become fruitless due to the multitude of personalities and opinions that exist, a small group of people may work better, but that group will need to do a lot more of the work to communicate and educate, ranging from IT-wide conversations to more intimate conversations with the small teams at the bottom of the organization chart. In an organization where information does flow more freely, it may be easier to bring in a broader set of people to ensure the full organization is represented, but push the burden off to each individual member to cascade the message throughout their respective areas.

In the context of governance, a feeling of representation tends to be very important. When someone doesn’t feel they have a voice, they’re less apt to comply with the policies. While there will always be some who won’t budge unless the policy is what they want, many more are content with the policy direction, even if it is different from their own views, so long as they feel their concerns have been heard. Keep these things in mind when structuring your Center of Excellence/Competency Center, and hopefully it will help you find the right size for a successful effort.

Getting Started with SOA Governance

With the upcoming publication of my SOA Governance book, which will be shamelessly plugged on this blog, you’ll be seeing more posts on SOA Governance, whether they are teasers to the book’s content or complementary material.

Many of the discussions I’ve had with colleagues on the topic of SOA governance have been about how to get started. Everybody’s heard from all of the analysts, bloggers, and other pundits on the importance of governance, but they don’t have a clear plan for how to put it in place at their organization. This isn’t a surprise, because organizations at this point are facing the need to change head-on.

My definition of governance is the people, policies, and processes an organization uses to achieve a desired behavior. If you’re not achieving the desired behavior, then change is needed.

It is at this early stage where the first breakdown can occur. All too often, the steps of articulating the desired behavior and the policies that will lead to that behavior are not done, or are done insufficiently. Rather, the focus immediately jumps to enforcement. Not surprisingly, the people involved with governance at organizations in this situation make statements like, “The developers are going to do whatever they want, and we can’t stop them.” Strong enforcement may catch things before they go live, but it doesn’t address the behavior that did things the wrong way to begin with. While it may result in some behavior change the next time around, that’s unlikely, because it did nothing to change the understanding of the project managers, architects, and developers of what they should be evaluating themselves against as the right thing to do. If change is needed, but you’re not stating what you want to change into, the behaviors are unlikely to change. This truth applies whether we’re talking about pre-project governance, project (design-time) governance, or run-time governance, although it’s most easily understood in the world of project governance.

All too often, architects and developers lament the fact that the only concern of the stakeholders is that the project is delivered on time and on budget. If this is an impediment to success with SOA, then that mentality needs to change. If it does not, you’re always going to have this tension between the project manager and the technical leadership on the project. There’s a ripple effect to this, however. The resistance to such a change stems from the fact that, in general, IT has not been able to demonstrate how adding technical concerns to the success criteria affects IT’s ability to deliver solutions in a timely manner. If you are able to change the success criteria so that projects can be delayed in order to address architectural concerns, you’d better be collecting metrics to show that things are improving over time. This again comes back to establishing your desired behavior and policies first. If you have these in place, now you have something to measure against. If you’re not achieving the desired behavior, it doesn’t mean that you need to scream louder. A change in policy, people, or process may be what is needed.

So, if you’re looking for a place to start, my recommendation is not to focus on enforcement. My recommendation is to define the behavior you’d like to see out of your organization, the policies that will help guide that behavior, and then focus first on education of the organization on those items. If your staff is better educated on the outcomes the organization wants to achieve, they’re more likely to comply with the policies that will lead to that behavior, lessening the need for strong enforcement.

The Use of Incentives

As usual, I had David Linthicum’s Real World SOA Podcast on during my drive into work today. I’m not going to pick on Dave today, however, as he was just the messenger. In fact, I liked that this week’s podcast mentioned the need for systemic change several times, which is a message that we need to continue to send out. See my last post for more on that subject. Anyway, Dave walked through this post from Joe McKendrick of ZDNet, entitled “Ten ways to tell it’s not SOA.” I hadn’t read this yet, but one item in particular stuck out when Dave mentioned it:

5) If developers and integrators are not being incented or persuaded to reuse services and interfaces, it’s not SOA. Without incentives or disincentives, they will keep building their own stuff.

The word in there that concerns me is “incented.” Many advocates for reuse also recommend some form of incentive program. Clearly, incentives are a possible tool to leverage for behavior change, but we’re much smarter than Pavlov’s dogs. Sometimes, people get too focused on the incentive, and not enough on the behavior. How many times have professional athletes put up big numbers in the last year of a contract because they’re “incented” by the free agent market in the upcoming off-season, only to flop back down to their career .237 average after signing their multi-million dollar deal? Years ago, I was on a tiger team investigating what it would take to achieve reuse at our organization, and a co-worker would simply say, “Their incentive is that they get to keep their job.” Too often, incentives focus on one-time behaviors, rather than on changes that we want to become normal behavior.

As a very specific example of the risk associated with incentives, I’ve previously posted on how we need to provide context on the degree to which a service might be applicable in my horizontal and vertical thinking post. If you are a developer working in a “vertical” domain, the opportunity to write shared services simply doesn’t exist. Should that developer be penalized for not producing a reusable service? An incentive focused on writing shared services is meaningless for that developer. It’s like making a blanket statement to a baseball team that anyone who hits 20 home runs in a season will get an extra million dollars. You don’t want everyone swinging for the fences on every at bat. Sometimes you need to lay down a sacrifice bunt. What about the pinch hitter who only gets 100 at bats the whole season? Should they be unfairly penalized?

For me, there’s simply too big a risk of having incentives based on something that’s easy to quantify, rather than the actual desired behavior, which, when applied to a broad audience, leaves some people out in the cold with no chance of getting the incentive through no fault of their own. Incentives are best used where a one-time change in behavior is needed due to extenuating circumstances; they should not be used to create behavior that should be the norm to begin with. As my colleague said, for normal behavior, your incentive is that you get to keep your job.

Governance and SOA Success

Michael Meehan, Editor-in-Chief of SearchSOA.com, posted a summary of a talk from Anne Thomas Manes of the Burton Group given at Burton’s Catalyst conference in late June. In it, Anne presented the findings of a survey that she did with colleague Chris Haddad on SOA adoption.

Michael stated:

Manes repeatedly returned to the issues of trust and culture. She placed the burden for creating that trust on the shoulders of the IT department. “You’re going to have to create some kind of culture shift,” she said. “And you know what? You’ve been breaking their hearts for so many years, it’s up to you to take the first step.”

I’m very glad that Anne used the term “culture shift,” because that’s exactly what it is. If there is no change in the way IT defines and builds solutions other than slapping a new technology on the same old stuff, we’re not going to even put a dent in the perceptions the rest of the organization has about IT, and are even at risk of making it worse.

The article went on to discuss Cigna Group Insurance and their success after a previous failure. A new CIO emphasized the need for culture change, starting with understanding the business. The speaker from Cigna, Chad Roberts, is quoted in Michael’s article as saying, “We had to be able to act and communicate like a business person.” He also said, “We stopped trying to build business cases for SOA, it wasn’t working. Instead use SOA to strengthen the existing business case.” I went back and re-read a previous post that I thought made a similar point, but found that I wasn’t this clear. I think Chad nails it.

In a discussion about the article in the Yahoo SOA group, Anne followed up with a few additional nuggets of wisdom.

One thing I found really surprising was that the people from the successful initiatives rarely talked about their infrastructure. I had to explicitly solicit the information from them. From their perspective, the technology was the least important aspect of their initiative.

This is great to hear. While there are plenty of us out there that have stated again and again that SOA isn’t about applying WS-*/REST or buying an ESB, it still needs to be emphasized. A surprising comment, however, was this one:

They rarely talked about design-time governance — other than improving their SDLC processes. They implemented governance via better processes. Most of it was human-driven, although many use repositories to manage artifacts and coordinate lifecycle. But again, the governance effort was less important than the investment in social capital.

I’m still committed to my assertion that governance is critical to a successful SOA initiative–but only because governance is a means to effect behavioral change. The true success factor is changing behavior.

I think what we’re seeing here is the effects of governance becoming a marketing term. The telling statement is in Anne’s second paragraph: governance is a means to effect behavioral change. My definition of governance is the people, policies, and processes that an organization employs to achieve a desired behavior. It’s all about behavior change in my book. So, when the new Cigna CIO makes a mandate that IT will understand the business first and think about technology second, that’s a desired behavior. What are the policies that ensured this happened? I’m willing to bet that there were some significant changes to the way projects were initiated at Cigna as part of this. Were the policies that, if adhered to, would lead to a funded project documented and communicated? Did they educate first, and then only enforce where necessary? That sounds like governance to me, and guess what: it led to success!

Mentoring and Followup to Clarity of Purpose

James McGovern posted his own thoughts in response to my Clarity of Purpose post. In it, he asks a couple of questions of me.

“I wonder if Todd has observed that trust as a concept is fast declining.” I don’t know that I’d say it is declining, but I would definitely say that it is a key differentiator between well-functioning organizations and poorly functioning organizations. I think it’s natural that as an organization grows, you have to work harder to keep the trust in place. How many people in a small town say they trust their local government versus a big city, let alone the country? The same holds true for typical corporate IT. As James points out, trust gets eroded easily when things are over-promised and under-delivered. Specifically in the domain of enterprise architecture, we’re at particular risk because we often play the role of the salesperson, but the implementation is left to someone else. When things go bad, the customer directs their venom at the salesperson, rather than digging deep to understand root cause. We also too frequently look to point fingers rather than fix the problem. It’s unfortunate that too many organizations have a “heads must roll” approach that doesn’t allow people to make mistakes and learn. A single mistake is a learning opportunity. Making the same mistake over and over is a problem that must be dealt with.

“Maybe Todd can talk about his ideas around the importance of mentoring in a future blog entry as this is where EA collectively is weak and declining.” Personally, I think it’s a good practice to always have some amount of your enterprise architects’ time dedicated to project mentoring. Don’t assign them as members of the project team where the project manager controls their tasks; rather, encourage them to actively work with the project team, keep up to date on what it is doing, and look for opportunities to help. The most important thing, however, is to have an attitude of contributing the help that is needed, rather than contributing your own wisdom. If you come in pontificating, going off on tangents, and expressing an “I know better” attitude, you’ll only get resentment. If, instead, you seek first to understand, as Stephen Covey suggests, you’ll have much better luck. While I was working as a consultant, I had a client who indicated that what they really needed was a mentor. For some consultants, this would have been perceived as the kiss of death, because it can result in an open-ended, warm-body engagement, without clear expectations and deliverables. There’s a lot of risk when expectations aren’t clear and can change on a moment’s notice. In reality, the engagement was simply to listen and then offer suggestions and advice, either to confirm what they already knew but lacked the confidence to go after with conviction, or to suggest things that they might not have thought about. It’s not an easy task, but it is absolutely critical. I think an architect who is willing to stand by his or her strategy and see it through to completion, not necessarily from a hands-on perspective, but from a mentoring and guidance perspective, can build far more trust.

Comments on TUCON 2008 Podcast

Dana Gardner moderated a panel discussion at TIBCO’s User Conference (TUCON) on Service Performance Management and SOA. There were some great nuggets in this session; I encourage you to listen to the podcast or read the transcript. The panelists were Sandy Rogers of IDC, Joe McKendrick, Anthony Abbattista of Allstate, and Rourke McNamara of TIBCO.

First, Sandy Rogers of IDC commented that what she finds interesting “is that even if you have one service that you have deployed, you need to have as much information as possible around how it is being used and how the trending is happening regarding the up-tick in the consumption of the service across different applications, across different processes.” I couldn’t agree more on this item. I have seen first hand the value in collecting this information and making it available. Unfortunately, all too often, the need for this is missed when people are looking for funding. Funding is focused on building the service and getting it out the door on time and on budget, and operational concerns are left to classic up/down monitoring that never leaves the walls of IT operations. We need to adjust the culture so that monitoring of usage is a key part of project success. How can we make any statements on the value of a service, or any IT solution for that matter, if we aren’t monitoring how that service is being used? For example, I frequently see projects that are proposed to make some manual process more efficient. If that’s the value play, are we currently measuring the cost of the manual activity, and how are we quantifying the cost of doing it the new way? Looking at the end database probably isn’t good enough, because that only shows the end results of processing, not the pace of processing. Automating a process enables you to process more, but if demand is stable, the end result will still look the same. The difference lies in the fact that people (and systems) have more time available for other activities.
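
To illustrate why the end database isn’t enough, here’s a small sketch of measuring the pace of processing from timestamped usage records. The records are invented for the example; the point is that you need the timestamps, not just the final row counts:

    from datetime import datetime, timedelta

    # Hypothetical records: (request received, request completed)
    events = [
        (datetime(2008, 6, 2, 9, 0),  datetime(2008, 6, 2, 9, 45)),
        (datetime(2008, 6, 2, 10, 0), datetime(2008, 6, 2, 10, 20)),
        (datetime(2008, 6, 3, 9, 0),  datetime(2008, 6, 3, 9, 15)),
    ]

    def average_cycle_time(events):
        """Pace of processing: mean time from request to completion."""
        total = sum((done - received for received, done in events), timedelta())
        return total / len(events)

    print(average_cycle_time(events))  # 0:26:40 for the sample data

Compare that number before and after automation, and you have a defensible efficiency claim; count rows in the end database, and you don’t.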

Sandy went on to state:

They (organizations) need a lot more visibility and an understanding of the strains that are happening on the system, and they need to really build up a level of trust. Once they can add on to the amount of individuals that have that visibility, that trust starts to develop, more reuse starts to happen, and it starts to take off.

Joe picked up on this, stating that “the foundation of SOA is trust.” No arguments here. If the culture of the organization is one of distrust, I see them as having very slim chances of any success with SOA. Joe correctly called out that a lot of this hinges on governance. I personally believe that governance is how an organization changes behavior and culture. Lack of trust is a behavioral and cultural issue. Only by clearly stating what the desired behavior is and establishing policies that create that behavior can culture change happen.

Anthony provided a great anecdote from the roll-out of their ESB, stating that they spent 18 months justifying its use and dealing with every outage starting with someone saying, “TIBCO is down.” In reality, it was usually some back-end service or component being down, but since the TIBCO ESB was the new thing, everyone blamed it. By having great measurements and monitoring, they were able to get to root cause. I had the exact same situation at a prior company, and it was fun watching the shift as people blamed the new infrastructure, and I would say, “No, it’s up, and the metrics it has collected make me think the problem is here.”

A bit later in the podcast, Joe mentioned a conversation with Rourke earlier in the day, commenting that “predictive analytics, which is a subset of business intelligence (BI), is now moving into the systems management space.” This sounds very familiar…

Rourke also made a great comment when referring to a customer who said “their biggest fear is that their SOA initiative will be a victim of its own success.” He went on to say:

That could make SOA a victim of its own success. They will have successfully sold the service, had it reused over and over and over and over again. But, then, because of that reuse, because they were successful in achieving the SOA dream, they now are going to suffer. All that business users will see from that is that “SOA is bad,” it makes my applications more fragile, it makes my applications slow down because so many people are using the same stuff.

That was a great point. SOA, if it is successful, should result in an increase in the number of dependencies associated with an IT solution. Many people shudder at that statement, but the important thing is that there should be those dependencies. What’s bad is when those dependencies aren’t effectively managed and monitored. The lack of effective management results in complicated, ad hoc processes that give the perception that the technology landscape is overly complex.

This was one of the better panel discussions I’ve heard in a while. I encourage you to give it a listen.

Gartner EA: The Management Nexus

Presenters: Anne Lapkin and Colleen Young

One thing all of the presenters in the EA Summit are very good at doing is using consistent diagrams across all of their presentations. This is at least the third presentation where I’ve seen this flow diagram showing linkage between business goals and strategy, and business planning and execution. Unfortunately, Anne points out that the linkage is where things typically break down.

Colleen is now discussing strategic integration, which begins with an actionable articulation of business strategy, goals and objectives. From there, she recommends a standardized, integrated, results-based management methodology. As a result, she claims that we will see exponentially greater benefits from enterprise capabilities and investments.

Anne is speaking again and emphasizing that we need a unified contextual view. This consists of a goal that goes one level deeper than “grow revenues by XY%” and includes a future end state with a timeline and measurable targets; principles that establish the desired behavior and core values; and relationships.

Colleen now has a great slide up called “The Implication of ‘Implications’.” The tag line says it all: “Unclear implications lead to inconsistent assumptions and independent response strategies that inevitably clash.” Implications that must be investigated include financial implications, business process implications, architecture implications, cultural change implications, and more. All parties involved must understand and agree on these implications.

A statement Colleen just made that resonates with my current thinking is, “Based upon these implications, what do I need to change?” All too often, we don’t stop to think about what the “change” really is. Work starts happening, but no one really has a clear idea of why we’re doing it, only an innate trust that the work is necessary and valuable. If the earlier planning activities have made these goals explicit, the execution should be smoother, and when bumps in the road are encountered, the principles are right there to guide the decision-making process, rather than relying on someone’s interpretation of an undocumented implication.

Once again, this was a good session. I’ve commented that a few sessions could have been a bit more pragmatic or actionable; this one definitely achieved that goal. I think the attendees will be able to leave with some concrete guidance that they can turn around and use in their organizations.

Gartner EA: Effective IT Planning

Presenter: Robert Handler

He’s showing a slide on IT Portfolio Management Theory, and how there is a discovery phase, a project phase, and an asset management phase. Discovery explores new technology, projects implement new technology, and the asset management phase operates it once it’s in production. Next, he shows an Enterprise Architecture diagram and discusses the whole current state/future state approach, risk tolerance principles, etc. He now has a slide with a summary of three areas: IT Strategic Planning, Project & Portfolio Management, and Enterprise Architecture. He shows that all three of these have overlapping goals and efforts that could be better aligned, because at present, they tend to exist in vacuums.

He’s now talking about some of the issues with each of these disciplines. First, he used Wikipedia’s definition of technology strategy to show the challenge there (Wikipedia claims it’s a document created by the CIO; the audience chuckled at that). On to Project and Portfolio Management: he’s calling out that only 65% of organizations cover the entire enterprise in their portfolio, and most PPM efforts are focused on prioritizing projects. On the EA side, he calls out that most efforts are mired in the creation of technical standards.

His recommendations for creating a win/win situation are:

  1. Collectively maintain, share, and use business context.
  2. Use EA to validate strategic planning and improve portfolio management decisions.
  3. Use portfolio management to generate updates against IT strategy and EA design and plans.

He proceeded to go into detail on each of these. Overall, I think this was a good 100-level presentation to tell an audience of EAs that they can’t ignore IT strategy efforts and PPM efforts; they need to be aligned with them. It could have been a bit more pragmatic in emphasizing how one would go about doing this.

Gartner EA: Michael Raynor

Presenter: Michael Raynor, Deloitte Consulting

This session is from Michael Raynor, author of “The Strategy Paradox.” The title is “The Accidental Strategist: Why uncertainty makes EA central to strategy.” He feels that it is ironic that there is a separation between the formulation of strategy and the implementation of strategy. He doesn’t agree with this approach. He feels that formulation and implementation should be a more interactive process and less linear, minimizing the strategic risk that an organization takes.

On this slide, his observation is that strategic uncertainty has been ignored. He used an example of the search engine competition of years ago and how, at least in part, Google won the space by making the best guess with regards to their strategy. It’s not that AltaVista made poor choices; they simply guessed wrong about what would be the most important factors in that marketplace. There is uncertainty associated with strategy.

An interesting anecdote he’s showing us now is that organizations that have a high commitment to strategy, which are often the companies that we try to emulate, have an extremely high chance of failure, while companies with a relatively low commitment to strategy have a very low failure rate. To me, this seems to be an example of low risk/low return and high risk/high return.

Extreme positions help customers know what to expect. Companies that are in the middle “wander around like a stumbling drunk.” His example of the continuum was Wal-Mart at one end (cost differentiation: given a choice of make it better or make it cheaper, Wal-Mart makes it cheaper), Nordstrom at the other end (product differentiation: Nordstrom makes it better), and Sears in the middle. Margins are best at the extremes and squeezed in the middle, yet most companies are in the middle. The reason is that at the extremes, it’s a winner-take-all game. K-Mart can’t compete with Wal-Mart, and Lord & Taylor couldn’t compete with Nordstrom, yet Sears and JCPenney can both co-exist just fine. The reason for this is that companies in the middle have chosen to minimize their strategic risks. Companies at the extremes take on more risk in their strategic choices.

He’s now discussing Microsoft. He’s explaining that Microsoft manages strategic risk through their portfolio and an understanding that things will change over time. This is different from diversification, where the profits of one division cover losses in another. Rather, if what’s important to revenue changes, the company is positioned to quickly leverage it. For example, if consolidation of computing in the home centers on the gaming console rather than the PC, Microsoft has the Xbox.

Another good example he’s presenting is Johnson & Johnson. Their Ethicon Endo-Surgery division sells colonoscopes, an area that previously differentiated on the technical excellence of the product. For growth, however, the problem was that not enough people were getting colonoscopies. In the US and Canada, a colonoscopy is a sedated procedure, which greatly increases the cost associated with it. In order to manage the strategic risk that selling colonoscopes may become a pain-management rather than a technical-excellence issue, Johnson & Johnson’s VC arm invested in a company that was advancing sedation technologies. (Hopefully, I got this recap right…)

The metaphor that he believes captures how to manage strategic risk is not evolution, but gene therapy. That is, if the environment changes in certain ways, the genes can be recombined in new ways to leverage that environment appropriately. Good talk!

Gartner AADI: Measuring the Value of SOA

I just finished my panel discussion with Mel Greer and Mike Kavis on measuring the value of SOA. I think we all had hoped that there would be more attendees, but hopefully those that chose to attend got something out of it. My main message was measure, measure, measure. I think it’s difficult to put a direct value on SOA adoption, that is, one where you can say the value was directly as a result of SOA efforts, but it’s not difficult to put a contributory value on SOA adoption. In other words, we need to measure the way IT is contributing to the success of the company as a whole, and as part of that, we can see some before and after measurements to see the impact of SOA and any other changes. The two things that I brought up in answering questions that I thought I’d share here are:

  • Instrument your services now. Part of the problem with measuring things today is that we haven’t instrumented things in the past. These days, value is almost always expressed in relative terms, such as “relative to what we’re doing now.” If you’re not collecting metrics, though, you can’t say what “now” is. Once again, we’re at one of those unique opportunities where the door is open to do things differently. Put the instrumentation in now, before you have a portfolio of 100+ services that have no instrumentation. (A minimal sketch of one way to do this follows this list.)
  • Measuring puts the spotlight on you, but will always enable you to answer questions better than before. A member of the audience asked, “What happens if your measurements show that you’re not achieving your goals?” This was a great question. Unfortunately, sometimes by the mere act of measuring things, people will immediately put the blame on you when things aren’t achieving the desired benefits, simply because you’re the one thing that can concretely demonstrate contribution (or lack thereof). My answer to this was two-fold. First, make sure you have the backing metrics to allow proper root cause analysis. If you just focus on one metric and nothing else, it makes root cause identification very difficult, and it puts the spotlight on the one area you’re measuring. This puts strategic initiatives like SOA at risk, because people will think the whole thing is flawed, when in fact, the lack of results may have nothing at all to do with SOA adoption. Second, I talked about the appropriate spin to put on it, this being the political season in the US. When something doesn’t work out as planned, the way to spin the metrics is to show that we’re in a better spot to fix the problem because of the measurements than we would have been before.
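
As promised in the first bullet, here’s a minimal sketch of what service instrumentation can look like, again using Python purely for illustration. In a real SOA deployment this would live in your intermediaries or monitoring infrastructure rather than in application code, and every name here is hypothetical:

    import functools
    import time
    from collections import defaultdict

    # In-process metrics store for the sketch; a real system would publish
    # these numbers to its monitoring infrastructure.
    call_counts = defaultdict(int)
    total_seconds = defaultdict(float)

    def instrumented(service_name):
        """Record how often a service operation is called and how long it takes."""
        def decorator(func):
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                start = time.perf_counter()
                try:
                    return func(*args, **kwargs)
                finally:
                    call_counts[service_name] += 1
                    total_seconds[service_name] += time.perf_counter() - start
            return wrapper
        return decorator

    @instrumented("customer-lookup")
    def lookup_customer(customer_id):
        pass  # the actual service logic goes here

    lookup_customer(42)
    print(call_counts["customer-lookup"], total_seconds["customer-lookup"])

The mechanism matters far less than the habit: if every service reports usage and elapsed time from day one, the “relative to what we’re doing now” baseline builds itself.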

The final thing I wanted to call out was a reference to a blog I posted yesterday at the request of Rob Eamon. Someone asked a question about how to get the stated goals from “the business” and the role of IT in contributing ways of measuring it. I called out that IT is part of the business, so there’s no reason that IT can’t contribute to the definition and appropriate ways to measure the business goals. Rather than viewing it as an “extraction” effort, it should be a joint effort with all members of the business, which includes IT.

If you attended the session, please feel free to post any comments or questions here. I hope it was valuable.

Gartner AADI: Application Strategies

Presenter: Andy Kyte

The title of this session is, “If You Had an Application Strategy, What Would It Look Like?” So far, I think I’m going to like it, because he’s emphasizing the need to manage the application lifecycle. That’s application lifecycle, not application development lifecycle. The application lifecycle ends when the last version of the application is removed from production. He’s emphasizing that an application is both an asset and a liability, with the liability being all of the people, technologies, and skills required to sustain it.

He’s now getting a ton of laughs by using a puppy/dog metaphor. He stated that we don’t buy a puppy, we buy a dog. It may be all cute and playful when we get it, but it will grow into a dog that sheds, eats, etc. Applications are the same way. Great quote just now: “Business cases are focused on putting applications in, and not on what to do after. We are contributing to the problem by not addressing this.” He emphasizes that an application strategy should cover the next seven years, or half the expected remaining life of the application, whichever is greater.

He’s now talking about stakeholder management and hitting on all the points I usually mention when talking about Service Lifecycle Management. The application is an asset, it must have an individual who is responsible for it, lifecycle decisions should be transparent, and all stakeholders (i.e. users of the application) must be identified and actively encouraged to play some role in the governance of the application.

The rest of the session is focusing in on the creation of the strategy document and making sure that it is a living, useful document. He’s emphasized that it is a plan, but also stressed the first law of planning: “No plan survives contact with the enemy.” It’s recognized that the plan is based on assumptions about the future. By documenting those assumptions, including leading indicators, leading contra-indicators, and expected timings, we can continually monitor and change the plan.

Overall, this is a very, very good session. What’s great about it is that it promotes a genuine change in thinking in the way that probably 90% of the companies here operate. Add a great speaker to the mix, and you’ve got a very good talk.

