Archive for December, 2007

Happy Holidays

To all my readers, have a Merry Christmas and Happy New Year!

2007 Predictions Results

Last year, like many other pundits, I decided to have a go at some predictions for 2007. Let’s revisit them.

  • Vendors: Surprise, surprise, the consolidation will continue. There aren’t too many niche SOA vendors left, and by the end of 2007 there will be fewer.
    Okay, so this was a no-brainer, but the devil is in the details. We had a very big deal in the SOA space between SoftwareAG and webMethods, and then there was the deal that hasn’t happened between Oracle and BEA. There really wasn’t much activity among the smaller SOA vendors (AmberPoint continues to chug along in its narrow niche), so maybe that category has already reached its threshold. There were far more dealings outside of the SOA space, most notably around business intelligence.
  • Operational Management: At least one of the major Enterprise Systems Management providers will actually come out with a decent marketing effort around SOA management along with a product to back it up…
    My wishful thinking didn’t pan out. I’m still astonished at the lack of activity around this from the big players in Enterprise Systems Management. While I understand that management software is an extremely difficult sell, you’d think that a connection could be made between the visibility provided by an SOA management system and the growing interest in, and importance of, business intelligence. Overall, the marketing message is still louder from companies like AmberPoint, SOA Software, and Progress Actional than from the big players.
  • Registry/Repository: At least one player in the CMDB space will enter into the Registry/Repository space, most likely through acquisition. There’s simply too much overlap for this not to occur.
    Once again, this didn’t pan out. However, in my conversations this past year I ran into far more people who now understand the relationship between the CMDB and the SOA Registry/Repository, although I think some of the lower-level marketing and sales people at the conferences still need to educate themselves on this, because many of them didn’t get it. At best, they only understood a basic need to grab some list of services via UDDI from a registry.
  • CMDB: At least one of the super-platform vendors will see the overlap between CMDB and Registry/Repository and begin to incorporate it into their offerings, either through partnership, or through an acquisition …
    I won’t be as harsh on myself on this one. While none of the super-platform vendors have done what I’ve said, the majority of them have begun to show the importance of the metadata repository in their overall infrastructure ecosystem. Whether it is Oslo, IBM’s WSRR, BEA’s ALRR, SoftwareAG’s CentraSite, or any of the others, expect to see more on this one.
  • Events: … I think we’ll see renewed interest in event description, as I see it as a critical tool for enabling agility. Services provide the functional aspect, but there needs to be a library of triggers for those functions. Those triggers are events. Along with this, the players in the Registry/Repository space will begin to market their ability to provide an event library just as they can provide a service library.
    I should have re-read my previous postings on the slow uptake of CEP and other event technologies, even though those posts stirred the pot with some CEP vendors. I haven’t seen much of anything on the event description front or any discussion of event libraries from the SOA Registry/Repository space.
  • Service Development and Hosting: … While it’s too early to proclaim the application server dead in 2007, I think the pendulum will begin to swing away from flexibility (i.e. general purpose servers that host things written in Java or C#) and toward specialization. A specialized service container for orchestration can provide a development environment targeted for orchestration with an execution engine targeted for that purpose …
    Thanks to Microsoft and Oslo, I think I can claim a winner on this one. The pace of adoption may be slower than the wording in my prediction, but I do think it’s safe to say that more companies are trying to leverage model-driven BPM-based technologies for development than last year. The key question is whether they’re seeing the success they desire or not. A subject for another blog entry is the current reality of model-driven development with today’s BPM tools.

Overall, I think my predictions are about typical of my thinking. I freely admit that I’m a forward thinker, and as a result, I’m usually guilty of thinking things will happen faster than they actually do. I haven’t decided whether or not to make 2008 predictions yet, and given that the bulk of my 2007 predictions are still off in the future, I don’t want to do a rehash of the same thing. We’ll see. If I feel inspired after seeing others publish their own predictions, perhaps I’ll do it again. One prediction I didn’t make was how many blog posts I’d have, and I was surprised to see that my 2007 predictions post was number 103, while this post will be number 345. While it’s not as easy to find time to post now that I’m back in the corporate world, hopefully you found some of those 200+ posts enlightening and will keep reading in 2008. According to FeedBurner, I’ve seen a pretty significant uptick in the number of subscribers over the year, so I’m thankful to all of you for continuing to read and for sharing the links with others.

The Return of the ESB

Just when you thought it was safe to build your SOA, the ESB has returned. A whitepaper from Paul Fremantle of WSO2 seems to have stirred the ESB pot once again, with a number of pundits chiming in on the future of the ESB. I was surprised at how little conversation there was about it at the Gartner AADI Summit in comparison to the previous Gartner event I had attended two years prior, but that’s quickly changed.

I read Paul’s paper, and from my perspective, not much has changed from the views that I expressed two years ago back in a Burton Group Catalyst presentation and then in this followup blog post. The bulleted list of capabilities provided in that post aren’t ones that are going to go away, and it surprises me that there continues to be so much debate in this space.

Joe McKendrick recently summarized a number of recent comments in a blog entry, including ones from Roy Schulte of Gartner, Lief Davidson of IBM, ZDNet blogger Dana Gardner, and Lorraine Lawson of IT Business Edge (who has given a couple of my recent blogs some publicity, thanks for that). There were a couple things in Joe’s entry that just rubbed me the wrong way.

First is this notion of federated ESBs. Both Roy and Dana made comments around this, and I simply don’t agree. I’ll admit, there is a certain class of organizations that will have multiple ESBs. They’re the same organizations that have one of just about every technology because they’re so huge. Take them out of the equation, because I don’t believe they represent the masses. For the rest of us, what does “federated ESBs” mean? Does it mean that I have multiple ESBs performing redundant capabilities? Or does it mean that I partition the capabilities across multiple products/devices, but with no two products providing the same capability, even though they may be capable of it? The latter is tolerable and likely; the former is not. For example, Roy mentioned security policies and quality of service. I may rely on an XML appliance for service security at the perimeter, and then rely on app server capabilities or some security agent within the data center. For simplicity’s sake, if we define quality of service as some form of intelligent routing and traffic shaping, I may rely on an ESB, a high-end XML appliance or network device, or advanced clustering capabilities of an app server. This type of partitioning makes sense. What doesn’t make sense is having one ESB (pick your favorite) providing security, QoS, etc. for services from development group A and having another ESB providing the same for services from development group B. To put it in context, routing is a core capability according to my definitions. Would you let development group A put a Cisco router in front of their services and development group B put a Juniper router in front of their services? You certainly wouldn’t do this if all of the services are hosted in the same data center. Yes, if your company has grown through acquisition and has lines of business all over the place, you may have different routers, but now we’re getting back to those super-large conglomerates I mentioned at the beginning.
For those of you now envisioning the “big honking ESB,” don’t think that way. Think of it more like the network. I may have many deployments of a single ESB product, each handling a portion of the services in the enterprise, with consumers directed to the appropriate ESB at design time rather than through some uber-ESB. Applications point at specific databases or hostnames; there’s no reason that services can’t work the same way. Again, I don’t view this as federation.
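
The design-time binding described above can be as simple as a lookup table in each consumer’s configuration, just as an application points at a specific database hostname. A minimal sketch in Python; the service names and hostnames here are hypothetical, not any vendor’s API:

```python
# Hypothetical design-time binding of services to the particular ESB
# deployment that fronts them, analogous to an application pointing
# at a specific database hostname rather than an uber-ESB.
SERVICE_ENDPOINTS = {
    "CustomerLookup": "http://esb-east.example.com/services/CustomerLookup",
    "OrderStatus": "http://esb-west.example.com/services/OrderStatus",
}

def endpoint_for(service_name):
    """Resolve a service to its ESB endpoint at design/configuration time."""
    return SERVICE_ENDPOINTS[service_name]

print(endpoint_for("OrderStatus"))
# http://esb-west.example.com/services/OrderStatus
```

Each consumer carries its own copy of this mapping, so no single runtime intermediary has to know about every service in the enterprise.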

What I believe we need to strive for in this space is centralized policy management and distributed enforcement, with standards-based communication between the policy manager and the enforcement points (if only those standards existed). For example, one could make the argument that SAP could provide a second ESB that deals with services provided by the SAP infrastructure. I don’t view this as federation, however. From my perspective, I don’t even care whether SAP has an ESB or not. What I do care about is whether the entry points exposed by SAP can enforce my QoS policies, my security policies, my versioning policies, etc. Let me centrally manage the policies, push them out, and have it just work. The challenge with this is not the enforcement architecture and federation there, but rather the metadata repository that will hold all of this policy information. This is where federation is important, because I may be getting third-party products that come with their own pre-populated registry/repository of services that I need to manage.
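
The centralized-management, distributed-enforcement model can be sketched as follows. This is purely illustrative Python; the class and method names are my own invention, not any vendor’s product or a real standard:

```python
# Sketch: one central policy manager pushes policies out to many
# enforcement points (an ESB, an XML appliance, an app server agent).
# All names here are hypothetical.

class PolicyManager:
    """Central store for QoS, security, and versioning policies."""
    def __init__(self):
        self.policies = {}            # service name -> list of policy dicts
        self.enforcement_points = []

    def register(self, point):
        self.enforcement_points.append(point)

    def set_policy(self, service, policy):
        self.policies.setdefault(service, []).append(policy)
        self.push()

    def push(self):
        # Distribute the full policy set; a real system would use a
        # standards-based protocol between manager and enforcement points.
        for point in self.enforcement_points:
            point.update(self.policies)

class EnforcementPoint:
    """Any intermediary that enforces policy on service requests."""
    def __init__(self, name):
        self.name = name
        self.policies = {}

    def update(self, policies):
        self.policies = dict(policies)

    def allows(self, service, consumer):
        # Enforce a simple allow-list security policy, if one exists.
        for policy in self.policies.get(service, []):
            if policy.get("type") == "security" and consumer not in policy["allowed"]:
                return False
        return True

manager = PolicyManager()
esb = EnforcementPoint("esb")
appliance = EnforcementPoint("xml-appliance")
manager.register(esb)
manager.register(appliance)
manager.set_policy("OrderService", {"type": "security", "allowed": {"billing"}})

print(esb.allows("OrderService", "billing"))        # True
print(appliance.allows("OrderService", "intruder"))  # False
```

The point of the sketch is that both intermediaries enforce the same centrally managed policy; neither one owns it.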

The second statement that I disagreed with was Paul’s comment that ESBs discourage the shared ownership of services. If it does, then I think ESB ownership is the problem. Most web applications require the configuration of a load balancing farm before they can be accessed. No one considers the keeper of the farms in network operations the “owner” of those web applications, so why would the team responsible for the configuration of the ESB be considered the owner of services? Personally, I think a lot of this stems from ESBs being targeted at developers, rather than at operations teams. As I’ve commented in the past, I think the inclusion of service development capabilities like orchestration and the leftovers from the EAI space messed up the paradigm and caused confusion. Keep mediation separate from development.

So, as I climb down from my soapbox, what are my parting words? I still believe that the capabilities in my post from last year are necessary, and an ESB is one way of providing those capabilities. Intermediaries are almost always used in web architectures, so there shouldn’t be such a strong aversion to having them as a front end to a service. That being said, too many intermediaries is a bad thing, because we haven’t gotten to the single-pane-of-glass management that I think is necessary. Rather, each intermediary has its own management console, and the chances of something getting missed or fat-fingered go up. Focus on an approach for centrally managing the policies, minimizing the number of places where policies are enforced, and keeping operational activities separated from service development activities.

Analytics, Reporting, and Working with the Power Excel User

Mike Kavis asks the question, “Why are you still generating reports?” In his blog, he states that “we should empower the users to create their own reports.” This brings up an interesting discussion. Anyone who has worked in IT for a few years knows that a significant amount of work is done using Excel. Excel is the empowered user’s tool of choice. Is this a good thing or a bad thing? On the one hand, individual users are empowered. On the other hand, are the analytics being performed of value to more than just that user? Could someone with a computational background perform those same analytics in a much more efficient manner?

The statement that Mike makes against which there is absolutely no argument was this one:

When users ask for a report, the business analyst must ask the user, “What problem are you trying to solve?”

Ultimately, IT and the power user should be collaborating on what the best solution is. The user may have a need for ad hoc analytics, but there should also be a way that those analytics come back into the fold and are available for broader use. If the power user wants IT to simply get out of the way or IT simply wants to maintain tight controls on information, that’s a sign of an unhealthy relationship. Rather, both parties should be concerned with leveraging each other’s strengths to the fullest extent to create the best solution.

James Taylor (not the singer) posted a followup to Mike’s blog that makes similar points. He also suggests that we dig into the reasons behind the request for information. Given his focus on enterprise decision management, he suggests focusing on the decision that the person was trying to make, rather than on the information used to support that decision. The key similarity between my message, Mike’s, and James’ is that we need to go beyond the initial request and understand the purpose behind it. Odds are that there’s something else that IT can be helping out with.

The Need for Reference Material

James McGovern called out that he hasn’t seen much discussion on the topic of reference architectures, and called me out for my thoughts. I’m never one to pass up a good blog topic, especially when I don’t have to come up with one on my own.

First, some background on my experience. My current job responsibilities include the development of reference architectures; my engagements with clients while I was a consultant all included the development of either a deliverable with “reference architecture” in the title or something that was clearly some form of reference material; and my job prior to consulting included the development of reference architectures within my last 12 months there. So, I’m no stranger to this space.

Reference architectures, and reference materials in general (since the need doesn’t stop at architecture), are an interesting beast. Personally, I view them as part of the overall governance process, mainly because they’re created to document a desirable pattern or approach that the authors would like (or will ensure that) others follow. At the same time, a document alone does not create governance, just as buying an SOA Registry/Repository doesn’t create SOA governance. Reference materials are a tool in the arsenal, and the degree to which they are used depends on how your architects work with the end consumer of the reference material. Organizations are all over the spectrum on this. Some architects live in an ivory tower with virtually no interaction with teams on active projects; some architects are the exact opposite, with their time completely consumed by day-to-day project activities. Most organizations fall somewhere in between.

My opinion is that reference material is absolutely necessary, if for nothing else than to keep the organization from tribal operations. If none of the standards and guidelines ever get written down, and decisions are based solely on tribal knowledge, the organization can quickly break down into the haves and the have-nots. If you’re part of the tribe, you have the knowledge. If you’re not, all you can do is make your best guess until you have to show up at tribal council and get lambasted. Trying to gain the knowledge from the outside is a very difficult process.

The next question, however, is what information belongs in the reference material? Does it do any good to document something in the reference architecture that everyone should already know, or should you assume that no one knows anything, and document it all? The problem is that EA has limited resources, just like everyone else, so you have to give consideration to the bang for the buck in the reference material. Once again, what’s “right” is very dependent on the end consumer of the material (which is why having a consumer focus is important). If you have an organization of seasoned Java programmers, how much reference material is needed on developing good web applications? If you have an organization with lots of VB6 and COBOL developers, they may need lots of reference material on web applications. So, know your audience, and make sure that the reference material is relevant and valuable for them.

Internal Audit and Enterprise Architecture

I had the opportunity recently to learn more about the role of internal audit in an organization. It was a very interesting and educational experience, and it got me thinking a lot about the relationship between internal audit and enterprise architecture.

What’s the visual that comes to mind when we hear the word audit? People in the USA probably think of an audit by the Internal Revenue Service. Many would rather go to the dentist and have a root canal done without anesthesia than be audited. So, you can certainly argue that the internal auditors have their work cut out for them. The presenter pointed out, however, that the role of internal audit is changing with time. While a few years ago they may have been viewed as a reactive police force, today there’s a shift toward a proactive consulting organization. Rather than coming in after the fact and telling organizations whether they’re compliant or not, they’re now being asked at the beginning, “What do we need to do to make sure we’re compliant?”

There are strong parallels to what goes on in the world of enterprise architecture. First off, many organizations have the dreaded architectural review board, the reactive police force of architectural governance. Project teams dread them. Somehow, we need to move from this model to the latter one, where project teams know they need to be architecturally compliant and actively seek out the input of enterprise architecture to ensure this is the case from the beginning.

Unfortunately, the challenge for Enterprise Architecture is that there is no corporate mandate for EA in the same way that there is for Internal Audit. While I personally thought David Linthicum’s posts on EA as a corporate responsibility were a big stretch, you could certainly argue that if enterprise architecture were a corporate responsibility in the same way as Sarbanes-Oxley, then there would be no debate on whether an organization needed Enterprise Architecture. I found it very sad that at the Gartner EA Summit closing session, when Gartner posted a prediction that 40% of EA programs will be stopped by 2012, about 40% of the audience agreed. Note that the prediction didn’t say “changed” or “restarted,” it said “stopped.” An organization listed on the NYSE can’t stop its Internal Audit program; it is required.

Overall, my takeaway from this session was that EA and Internal Audit need to be best friends. If Internal Audit has an IT audit group, which most do, it needs to be working closely with the EA group, as both are providing governance. In one of my panel discussions at the Gartner event, I made the comment that EA is certainly about governance. It could be argued that EA activities are basically centered around two major activities: strategic planning and governance. While Internal Audit probably has less of a role in strategic planning (except where governance issues make it necessary), clearly there’s significant overlap in the governance function. Determine how both groups can work together to ensure that projects aren’t bombarded with governance from multiple groups. The view of the governed is already very negative; we need to do what we can to change that view.

SOA Design Patterns

James Urquhart brought to my attention the public review of SOA Patterns, as authored in the forthcoming book, “SOA Design Patterns,” by Thomas Erl. You can see the press release from Prentice-Hall here.

My first reaction when I received the email, prior to visiting the SOAPatterns.org site was one of skepticism. While I think patterns can add a lot of value, the immediate problem I saw stems from the fact that I’m very much a believer in business-driven SOA. In order to reach a broader audience across multiple verticals, you have to be more business agnostic. As we get more business agnostic, we naturally move deeper into the technology stack, and things at that level of granularity may not be the best service candidates, although they may be great candidates for reusable frameworks. If we’re talking about patterns inside of the service implementation, then we’re really talking about general design patterns, building on the original work of the Gang of Four, not really SOA Design Patterns.

So, with my bias set, I visited the web site. The first thing I hoped to see was some classification by business industries, such as “SOA Patterns for Insurance” or “SOA Patterns for Health Care” but I didn’t find them. Bummer, but I also didn’t expect this. Something like that would be of significant value as intellectual property to a consulting firm, and I think they’d make a lot more money keeping it to themselves and leveraging it on their engagements. What was on the site was four chapters: Basic Service Inventory Design Pattern Language, Architectural Design Patterns, Basic Service Design Pattern Language, and Service Design Patterns.

In looking at the first chapter, Basic Service Inventory Design Pattern Language, my first reaction was again one of skepticism. The first page begins with “Inventory Context Design Patterns,” “Inventory Boundary Design Patterns,” “Inventory Structure Design Patterns,” and “Inventory Standardization Design Patterns.” It also introduced a phrase, “service inventory architecture,” which I had never heard before. Looking at this page, nothing was making a connection with me. As I drilled into each section, I did find some goodness, but it could be argued that what is really being presented in this section is a description of a step in a methodology, rather than a pattern. For example, one pattern listed is the “Enterprise Inventory” pattern, which lists the problem as:

Delivering services independently via different project teams across an enterprise establishes a constant risk of producing inconsistent service and architecture implementations, compromising recomposition opportunities.

The solution is:

Services for multiple solutions can be designed for delivery within a standardized, enterprise-wide inventory architecture wherein they can be freely and repeatedly recomposed.

This doesn’t feel like a “pattern” to me, but it’s certainly something that should be done. I don’t think anyone would argue that having an enterprise service inventory is a bad thing. Another pattern I looked at was the “Vendor-Agnostic Context” pattern. Again, what was presented in the “pattern” was goodness, however, it felt more like a principle rather than a pattern. This particular one did do a good job in demonstrating how this principle does lead to the use of specific techniques, such as leveraging the “Canonical Protocol” and “Decoupled Contract” patterns.

Overall, what did I think? Well, it certainly didn’t meet my original hope of seeing industry-specific business patterns. By that, I mean I didn’t find something that said “Order to Cash” with guidance on the types of services that should make up that process. I didn’t find something that said, “here are services that all organizations with an HR department should have.” Nothing business-specific whatsoever. Of course, I didn’t expect to see this, it’s just what I was hoping to see, just as I hoped the (defunct?) SOA Blueprints effort from OASIS a couple years ago might produce something along these lines.

Putting that bias aside, the more I dug into the site, the more I found things that provided good guidance, even though I’d say the use of the “pattern” moniker was a bit liberal. If you want to get an idea of what principles and factors should be considered in creating good services, versus just slapping WSDL or XML schemas in front of some existing logic, there’s a lot of good material here that is freely available. While some of the earlier pages read too much like a college textbook, a couple of drilldowns brought me to things that were applicable to my daily work and made sense. So, based on that, I would recommend that people at least visit the patterns site, drill down into it, and see what nuggets you can leverage in providing guidance to your service development teams. If you really like it, then perhaps Thomas’ book can become part of your standard library for your developers.

Encouraging Culture Change

In a comment on my “EA and SOA Case Panel” entry, Surekha Durvasula asked me a couple questions. They didn’t come up in the panel discussion, so I thought I’d respond in a separate entry, as the topic should be of interest to many of my readers. She wrote:

Is “reuse” of a business service considered a valuable metric? How does governance influence the “reusability metric”? Did this come up during this SOA panel?

Specifically, I am wondering if service governance has any bearing in terms of not only promoting the usage of a service but also in ensuring that the enhancement of a service is in keeping with the enterprise-worthiness of the service. Often times it is the evolution of the service where cross-domain applicability is sacrificed.

Also, is there a trend in the industry in terms of promoting business service usage via the use of a “rewards program” or in tying it to compensation packages? Have some industries reached a level of maturity in terms of service reuse especially in those industry verticals that are hit with global competition forcing them to reduce overall operations costs and/or to offer novel product offerings?

Let’s take these one at a time. On the subject of reuse, I absolutely think that the number of consumers is a valuable metric. At the same time, when dealing with reuse, one must be cautious that it isn’t the only metric of interest. I’ve been in meetings with individuals who have made comments like, “If a service isn’t reused, why are you making it a service in the first place?” I strongly disagree with statements like this, as do most pundits in the SOA space. To defend this position, I frequently quote the oft-referenced Credit Suisse SOA efforts, where they stated that their average number of consumers per service was 1.5. This means that there will be many services that aren’t reused, and probably some that are used by many consumers. While reuse is important, we also have to be looking at metrics for agility, which, loosely stated, is the ability to respond to business change. This will involve tracking the time it takes to develop solutions. The theory is that by breaking a solution apart into autonomous services, I reduce the number of touch points when the business needs change. In reality, it depends on the type of change. For example, most of us would agree that a separation of presentation logic from the core business processing is a good thing. That being said, there certainly are plenty of changes that will require touching both the presentation logic and the business logic. One of the most difficult parts of SOA is knowing where to draw service boundaries, because the rules are always changing.
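
The Credit Suisse figure of 1.5 consumers per service is worth unpacking: even at that average, most individual services can still have exactly one consumer. A tiny sketch with invented numbers (the service names and counts are hypothetical, chosen only to hit that average):

```python
# Hypothetical consumer counts per service. The point: an average of
# 1.5 consumers per service is entirely consistent with the majority
# of services having a single consumer.
consumers_per_service = {
    "CustomerLookup": 3,
    "OrderStatus": 2,
    "AddressValidation": 1,
    "CreditCheck": 1,
    "TaxCalculation": 1,
    "ShipmentNotice": 1,
}

average = sum(consumers_per_service.values()) / len(consumers_per_service)
single_use = sum(1 for n in consumers_per_service.values() if n == 1)

print(average)     # 1.5
print(single_use)  # 4 of the 6 services have exactly one consumer
```

So a reuse average above 1 does not mean every service is reused; it means a few heavily shared services pull the average up.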

Back to the subject: if we have reusable services, what role does governance play in ensuring that the service doesn’t fork into a bunch of one-off, consumer-specific variants? This is a very interesting question, one that I hadn’t thought much about in the past. My gut is telling me that the burden for this belongs with the service manager, not with a governance team. That’s not to say that there shouldn’t be any involvement from the governance group, but I see a much stronger role for governance in establishing the original service boundaries and assigning service ownership. For future versions, the service manager must be the one who can recognize when the service is getting outside of the boundaries that were originally intended, and this will happen. In some cases, boundaries may need to be redefined. In other cases, we may need to push back on the consumers. All of this starts with that service manager. The service manager must balance the needs of the consumer against the cost of service management. Measurements for determining that manager’s performance should include the number of versions currently being managed and the time required to respond to consumer requests. It is then in their best interests to keep the service focused on its original purpose.

Finally, regarding “rewards programs” or incentives, I don’t know that I’ve ever heard of a case study centered around reuse that didn’t involve incentives. SOA is about culture change, and it’s extremely difficult to change culture without incentives. One need only look at government to understand how change occurs. No one would be happy if the federal government mandated that all cars sold starting in 2008 had to get 50 mpg or higher. This is the “big stick” approach: I’ve got a big stick and you’ll get whacked with it if you don’t comply. In terms of IT incentives, one manager I worked with summed up the “big stick” approach well: “Your incentive is that you’ll keep your job.” More typically, the government takes a “carrot” approach, at least at the beginning. Tax breaks are granted to companies that produce high-mpg vehicles and to consumers that buy them. These incentives may not even cover the added cost of that approach (e.g., does a $500 tax break for 4 years justify spending $3000 more on a vehicle?), but just the fact that they exist can often be enough to encourage the behavior. Only when enough momentum has gathered does the stick come out, essentially stating a policy that matches what the majority of the people are doing already. Overall, I think that incentives should be viewed as a short-term tool to get momentum behind the change, but should always be planned for phase-out once the desired behavior is achieved. Have we reached that point with SOA? I’ve yet to see a company that has. Have we reached that point with reusable libraries? Partially. Most developers would not build their own low-level frameworks today. The problem, however, is that multiple frameworks exist, and there’s still strong resistance in many organizations to having a single solution come from a frameworks team. I heard my first talk on reuse back in 1998, so it’s very clear that widespread culture change takes a long time.
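
The tax-break arithmetic in the parenthetical above is worth making explicit: the incentive falls short of the added cost, yet can still tip behavior.

```python
# The "carrot" example from the text: a $500/year tax break for 4 years
# against a $3000 premium on the vehicle.
tax_break_per_year = 500
years = 4
added_vehicle_cost = 3000

total_incentive = tax_break_per_year * years
shortfall = added_vehicle_cost - total_incentive

print(total_incentive)  # 2000
print(shortfall)        # 1000: the incentive covers only part of the cost
```

The buyer is still $1000 out of pocket, which is exactly the point: the carrot encourages the behavior without fully paying for it.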

Making Apple TV Better

I just saw this article on AppleInsider which stated (not surprisingly) that first-year Apple TV sales are coming in far below expectations. While I am definitely an Apple fan, I don’t currently own an Apple TV. I also don’t own an HDTV, but if I did, the key to my purchasing the device would be video rentals.

As it currently stands, Apple TV is not a good value proposition for me. First off, I’ve only purchased one video from the iTunes store, and that was because something got screwed up on the DVR and my wife wound up missing an episode of Grey’s Anatomy. I already pay enough for my satellite TV, and am not about to drop it in favor of purchasing individual shows via iTunes. On top of that, the only physical video media I’ve purchased have been kids’ DVDs. I have, however, rented a few movies. More often than not, this has been associated with some plane travel where I’d really just like to put the movie on my iPhone. I’ve never done the NetFlix thing, but I think something without a due date but with restrictions on the number of times it can be viewed would work best. Trying to find 2+ hours to watch a movie with three young kids running around the house is all but impossible.

The second feature I’d like to see is a device that acts purely as a streaming video recipient hooked up to the TV. How I really want my video to work is to have one central server that holds all of my video and can stream it to any TV in the house. I’d rather not have to have computers hooked up to each one, however. Take Apple TV, strip out the hard drive, leaving just the HDMI out and the wireless connectivity, and now it’s at least getting tempting. You could use it either in conjunction with Apple TV (if you need external storage for your video) or simply with iTunes on a Mac/PC in your house. It would be the video equivalent of the AirPort Express, which connects to your stereo system.

In reality, the best scenario would be if Apple released a real Apple TV: a 42″ or 46″ 1080p LCD TV, competitively priced against the higher-end competition, that contained a hard drive and wireless capabilities. While I doubt they’d do it, Apple could OEM it to other manufacturers, since the integration with the display should be pretty standard at this point and not get in the way of the user experience. Add in DVR capabilities, the ability to sync recorded programs on the DVR with iTunes so I can move them to my iPod/iPhone or watch them on another computer or TV, and the ability to auto-tune my satellite receiver, and I’d be all over it. Who knows, maybe Steve will announce something close at the next MacWorld.

Gartner EA Summit: Closing Session

I’m checked out now, and in the closing session, which is an open research panel with four Gartner panelists. They’re throwing up a statement and then debating it. The first prediction is “Through 2012, 40% of EA programs will be stopped due to poor execution.” The audience seemed to be in favor of the statement, although I wasn’t. Some of the audience comments had to do with lack of funding, lack of support, and confusion over the value proposition for EA. I think it’s likely that 40% of EA programs may change in how the company is attacking the problem (in fact, probably even more than that), but I’d be surprised if the programs were abolished altogether.

The next statement is “By 2010, 70% of EA teams will be forced to spend as much time on information architecture as they currently spend on technical architecture.” About 70% of the audience agreed with this statement. I differ slightly. I definitely think the emphasis on information architecture will increase, but I also think some of the technical architecture may decrease. So, I would say that information architecture will probably receive equal treatment to technical architecture, but probably not as much as what technical architecture receives today. Interestingly, Nick Gall asked how many of us expect that Business Process Architecture will increase, more so than Information Architecture, in the next few years, and about 70% of us (myself included) agreed.

The final statement is “By 2010, the primary focus of technology architecture will shift from defining product standards to identifying and describing shared and repeatable technical services.” About 70% of the audience agreed with this. The key word that I disagree with on this one is the use of the term “technical services”. If your definition of this term includes “business services” exposed through technology, then I’d agree. If we’re talking about capabilities in the technical domains, like security, routing, etc., then I disagree. I think they are using the former definition, so I would agree with this one.

Time to head to the airport. I’d like to publicly thank the SOA Consortium and Pascal Winckel with Gartner for giving me the opportunity to be a speaker and for putting on two great summits. I especially enjoyed the EA Summit and hope to attend again.

Gartner EA Summit: Logical and Conceptual Models for Security Architecture

This session is being presented by Tom Scholtz. His opening message is that we have to avoid one-size-fits-all security solutions and that we need to think strategically, otherwise we’ll always be behind the risk management curve. The approach he’s advocated is very consistent with EA: Plan, Build, Govern, Run, and back to Plan again.

He’s now talking about the organization model for information security in the future. The first item is that he recommends moving the corporate information security team outside of IT to increase the message that security is a corporate issue, not just an IT issue. This team would be involved with risk management, policy management, program management, business continuity management, architecture, and awareness. This group would report to a Corporate Risk Manager. Within IT, reporting to the CIO, there would be an IT Information Security Team looking at risk assessment, design and implementation, disaster recovery plan, security operations, and vulnerability assessments. Within business units, they would have local continuity plans, awareness, and policy management. Tying it all together is governance.

He’s now recommending that we become more process centric about security. In the vertical dimension, there are four key protection processes:

  • Identity and Access Management
  • Network Access Control
  • Vulnerability Management
  • Intrusion Prevention

In the horizontal dimension, he has strategic processes:

  • Risk and Policy Management
  • Security Architecture
  • Business Continuity
  • Relationship Management

Not much more to report on this one, as I need to skip out early and check out of the hotel to catch my flight home. This definitely looks to be more appropriate to an ISO manager than someone closer to the technology like me, but I’ll have to review the notes in the conference materials to get more detail.

Here Comes Another Bubble

Courtesy of Brandon Satrom, who I met here at the Gartner Summit, comes this hilarious YouTube video. If you don’t see it below, click here to go to YouTube.

[kml_flashembed movie=”http://www.youtube.com/v/I6IQ_FOCE6I” width=”425″ height=”350″ wmode=”transparent” /]

Gartner EA Summit: Communication for EA

In this session, Robert Handler is giving a talk entitled Communication, Persuasion, and Interpersonal Skills for EA. In a previous job, I worked with someone who was passionate about communications. He helped us create a communication plan around SOA and our competency center, and I really think it made a huge difference in our efforts, so I had a keen interest in this topic. I’ve previously blogged on some of the subjects in this presentation, including my Focus on the consumer entry.

Robert is spending a lot of time talking about Marketing and Sales, which is a great approach for this subject. For example, one task in marketing is identifying your market and segmenting it. Robert correctly points out that the different segments for EA (senior leaders, business unit leaders, IT leaders, and IT groups) all want different things. Great point. I’ve met many technical people who want to create one thing and expect that everyone will see the inherent value in it, when there hasn’t been any effort to tailor it or present it according to the differing needs.

He’s also spoken about creating the delivery system. Are you going to use a web portal? Wikis? Think about how your EA deliverables will be sent out to your markets. He’s also addressed branding and its importance.

On the sales side, he’s begun with a discussion on stakeholder analysis, and a detailed level at that. Again, it’s about understanding how to communicate and sell to the particular personalities and how they make decisions. Do they want a lot of detail? Do they want structure? Do they want to hear more about people and social issues, or do they want specific tasks? Again, there’s no one approach, it’s about matching the right approach to the right person.

He’s now talking about persuasion, and presenting techniques from the work of Dr. Robert Cialdini. He defines persuasion loosely as getting people to reply to your requests. The principles associated with persuasion include:

  • Contrast principle: You can change the way someone experiences something by giving them a contrasting experience first.
  • Principle of reciprocation: People feel obligated to give back something similar to what was given to them.
  • Principle of scarcity: People want what they can’t have.
  • Principle of authority/credibility: People defer decisions to experts, legitimate or otherwise.
  • Principle of trust: Admit a flaw or weakness to show you are trustworthy.
  • Principle of consistency: People want consistency and to be viewed as consistent.
  • Principle of liking: People like those who like them, pleasant associations work the same as liking, and people say yes to those they believe are cooperating with them.

Message: keep these principles in mind as you sell EA. For example, he told us about a group that had little badges with LED lights that were given to projects that worked well with EA. The EA team was very stingy with them, however, adhering to the principle of scarcity. The result was that teams worked that much harder to be compliant, because they wanted these $2 badges that they could have ordered from Oriental Trading Company.

I’ve been really impressed with both of Robert’s sessions that I’ve attended. I’ll have to look into more of his research, and recommend that other Gartner clients out there do so as well.

Gartner EA Summit: The Case of the Irritating Architect

This is the afternoon keynote in the EA Summit being given by Susan Cramm, the Founder and President of Valuedance, and CIO Magazine columnist. Some takeaways from her true story of Jerry, the Irritating Architect.

  • People will choose likability over competence any day.
  • Motivation is built upon respect, connectivity, and pride. She used these three concepts then in the key phases of architecture: strategize, architect, lead, govern, and communicate.
  • You can’t influence people you don’t know.

Overall, it was a good presentation. It definitely delivered the message of adding value to others and enabling them, rather than the opposite extreme of taking power from others and disabling them. Elements of Jerry that she described made me think of a very good book I’ve read, “The No A******e Rule.”

Gartner EA Summit: Beyond the Business Case- Projects in the Enterprise Architecture

In this session, Robert Handler (of Gartner) is talking about the relationship between Enterprise Architecture and Project Portfolio Management (PPM). First off, there is clear overlap between the two efforts, provided that your PPM efforts are involved with the actual project definition and approval efforts, and not simply involved after projects are approved and underway. Both efforts have the common desire to define projects that are intended to progress the company from a current state to a future state, although the criteria involved in project selection probably vary between the two.

On the PPM side, the biggest challenge is that this often isn’t happening. The survey quoted in the slides showed that the bulk of the time in PPM is spent mediating discussions on project priority, and just over 50% of the projects are even under the purview of the PPM effort. On the EA side, the slide states that most efforts are “mired in the creation of technical standards,” operating too much in a reactionary mode to current projects, and have very little effort on gap analysis. So, the end result is that neither effort is necessarily meeting their objectives, especially in the areas where they overlap, and neither effort is communicating well with the other.

He’s now showing an anchor model, and demonstrating how projects can be mapped onto the anchor model to show areas of concern and overlaps. I mentioned this anchor model in my blog entry on the Beginner’s Guide to Business Architecture session, and it’s jumping out again. This is definitely something that I need to get more information on, and hopefully start leveraging. He also presented a common requirements matrix and scoring approach which can assist in prioritization. The one challenge I see with this latter approach is that the projects all have to come along at the same time, which isn’t always the case. We’ll see if he gets to my question on this subject that I just submitted. (Note: he did, and pointed out that it doesn’t assume that everything comes in at the same time, but that you do need to be willing to make adjustments to in-flight projects, such as removing resources, altering scope, etc.)
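Robert didn’t share the exact mechanics of the scoring approach, but the general idea of a weighted requirements matrix can be sketched in a few lines. Everything here is illustrative: the criteria, weights, project names, and 1–5 ratings are my own assumptions, not anything from the session.

```python
# Hypothetical sketch of a common requirements matrix with weighted scoring
# for project prioritization. Criteria, weights, and ratings are made up.

CRITERIA = {
    "strategic_alignment": 0.4,        # how well the project advances the future state
    "reuse_of_existing_services": 0.3, # leverage of shared/repeatable capabilities
    "architectural_risk": 0.3,         # rated so that a higher score means lower risk
}

def score_project(ratings):
    """Weighted sum of a project's 1-5 ratings against each criterion."""
    return sum(CRITERIA[c] * ratings[c] for c in CRITERIA)

projects = {
    "Customer Portal": {"strategic_alignment": 5,
                        "reuse_of_existing_services": 3,
                        "architectural_risk": 2},
    "Batch Migration": {"strategic_alignment": 2,
                        "reuse_of_existing_services": 4,
                        "architectural_risk": 4},
}

# Rank candidate projects by score, highest first.
ranked = sorted(projects, key=lambda p: score_project(projects[p]), reverse=True)
```

The value of an approach like this isn’t the arithmetic, it’s that it forces every candidate project to be rated against the same criteria, which is exactly where the EA/PPM overlap lives.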

Back to the EA side of things, he’s advocating the use of patterns, principles, models, and standards as part of the architectural guidance that projects use. No arguments from me here. His slide also states that resource utilization in one organization went from 67% to over 80% when these are used effectively.

His closing recommendations are pretty straightforward. First, EA needs to be coordinated with PPM activities. Someone needs to take the first step to establish some synergies. Second, use coarse-grained EA deliverables for better project selection criteria. Third, use fine-grained EA deliverables on projects as gating factors. Fourth, capture some baselines and measure overall improvements, such as how long the design phase takes, productivity, etc. Finally, evolve maturity and effectiveness from where you are toward the ideal.

Overall, this was a very good session. It could have been a bit more prescriptive, but in terms of clearly showing the relationship between PPM and EA, it hit the mark.

Disclaimer
This blog represents my own personal views, and not those of my employer or any third party. Any use of the material in articles, whitepapers, blogs, etc. must be attributed to me alone without any reference to my employer. Use of my employer’s name is NOT authorized.