Archive for the ‘Integration’ Category

New Compilation Book and Possible EA Book

While I have not yet embarked on writing another book, I have been published in a second book. The publisher of my book on SOA Governance, Packt Publishing, has released their first compendium title, “Do more with SOA Integration: Best of Packt.” It features content from several of their SOA books and authors, including some from my book on SOA Governance. If you’re looking for a book that covers a broader perspective on SOA, with some great content on SOA Governance as a bonus, check it out.

On a related note, I’ve been toying with the idea of authoring another book, this time on Enterprise Architecture. There are certainly EA books on the market already, so I’m interested in whether you think there are gaps in what’s available. If I did embark on this project, my goal would be the same as with my SOA Governance book: keep it easily consumable, yet practical, pragmatic, and valuable. That’s part of the reason I chose the management-fable style for SOA Governance; a story is easier to read than a reference manual. If I can find a suitable story around EA, I may choose the same approach. Please send me your thoughts by commenting on this post, via email, or with a LinkedIn message. Thanks for your input.

Governance Needs for Cloud Services


David Linthicum started a debate when he posted a blog entry with the attention-grabbing headline “Cloud computing will kill these 3 technologies.” One of the technologies listed was “design-time service governance.” This led to a response from K. Scott Morrison, CTO and Chief Architect at Layer 7, as well as a forum debate over at eBizQ. I added my own comments both to Scott’s post and to the eBizQ forum, and thought I’d post my thoughts here.

First, there’s no doubt that the run-time governance space is important to cloud computing. Clearly, a service provider needs to have some form of gateway (logical or physical) that requests are channeled through to provide centralized capabilities like security, billing, metering, traffic shaping, etc. I’d also advocate that it makes sense for a service consumer to have an outgoing gateway as well. If you are leveraging multiple external service providers, centralizing functions such as digital signatures, identity management, transformations, etc. makes a lot of sense. On top of that, there is no standard way of metering and billing usage yet, so having your own gateway where you can record your own view of service utilization and make sure that it’s in line with what the provider is seeing is a good thing.
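
To make the metering point concrete, here is a minimal sketch of the utilization-recording piece of such an outgoing gateway. This is illustrative only; it assumes a plain HTTP POST to the provider, and all names are invented rather than taken from any product:

    import java.io.IOException;
    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.AtomicLong;

    /**
     * Illustrative outgoing gateway: every request to an external provider
     * is channeled through here, so we keep our own view of service
     * utilization to reconcile against the provider's bill.
     */
    public class OutboundServiceGateway {

        // our own record of utilization, keyed by provider endpoint
        private final Map<String, AtomicLong> callCounts = new ConcurrentHashMap<>();

        public InputStream invoke(String endpoint, byte[] requestBody) throws IOException {
            // metering happens here; security, signatures, transformations,
            // etc. would be centralized at this same choke point
            callCounts.computeIfAbsent(endpoint, k -> new AtomicLong()).incrementAndGet();

            HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.getOutputStream().write(requestBody);
            return conn.getInputStream();
        }

        /** Our count for an endpoint, to compare with what the provider bills. */
        public long recordedUsage(String endpoint) {
            AtomicLong count = callCounts.get(endpoint);
            return count == null ? 0 : count.get();
        }
    }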

The real problem with Dave’s statement is the notion that design-time governance is only concerned with service design and development. That’s simply not true. In my book, I deliberately avoided this term, and instead opted for three timeframes of governance: pre-project, project, and run-time. There’s a lot more that goes on before run-time than design, and these activities still need to be governed. It is true that if you’re leveraging an external provider, you don’t have any need to govern the development practices. You do, however, still need to govern:

  • The processes that led to the decision of what provider to use.
  • The processes that define the service contract between you and the provider, both the functional interface and the non-functional aspects.
  • The processes executed when additional consumers at your organization begin using externally provided services.

For example, how is the company deciding which service provider to use? How is the company making sure that decisions by multiple groups for similar capabilities are in line with company principles? How is the company making sure that interoperability and security needs are properly addressed, rather than being left at the whim of whatever the provider dictates? What happens when a second consumer starts using the service, yet the bills are being sent to the first consumer? Does the provider’s service model align with the company’s desired service model? Does the provider’s functional interface create undue transformation and integration work for the company? These are all governance issues that do not go away when you switch to IaaS, SaaS, or PaaS. You need to ensure that your teams are aware of the contracts in place and don’t start sending service requests without being properly onboarded into the contractual relationship, and that your internal allocation of charges takes multiple consumers into account where necessary. All of this must happen before the first requests are sent in production, so the notion that run-time governance is the only governance concern in a cloud computing scenario is simply not true.
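
To sketch just the onboarding piece, the outgoing gateway described earlier could refuse to forward requests from consumers that have not been added to the contract. As before, this is illustrative only and every name is invented:

    import java.util.Map;
    import java.util.Set;

    /**
     * Illustrative pre-flight check: a consumer must be onboarded into the
     * contractual relationship before its requests are allowed out, and
     * consumers are tracked individually so internal charge allocation can
     * take multiple consumers into account.
     */
    public class ContractRegistry {

        // provider endpoint -> identifiers of consumers onboarded to that contract
        private final Map<String, Set<String>> onboarded;

        public ContractRegistry(Map<String, Set<String>> onboarded) {
            this.onboarded = onboarded;
        }

        public void checkOnboarded(String endpoint, String consumerId) {
            Set<String> consumers = onboarded.get(endpoint);
            if (consumers == null || !consumers.contains(consumerId)) {
                throw new IllegalStateException(consumerId
                        + " has not been onboarded to the contract for " + endpoint);
            }
        }
    }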

A final point, added after some conversation with Lori MacVittie of F5 on Twitter: let’s not forget that someone still needs to build and provide these services. If you’re a service provider, you clearly still have technical, design-time governance needs in addition to everything else discussed earlier.

SOI versus SOA

Anne Thomas Manes’ “SOA is dead” post back at the beginning of the year sparked quite a debate, which is still going strong. On the Yahoo SOA group, the question was asked as to what exactly Anne meant by SOI, or Service-Oriented Integration. Here’s my response:

SOI, service-oriented integration, is probably best stated as WSOI: Web Services-Oriented Integration. It’s simply the act of taking the same integration points that arise in a project and using web services or some other XML over HTTP approach to integrate the systems. Could this constitute a service-oriented application architecture? Absolutely, but in my mind, there are at best incremental benefits in this approach versus some other integration technology.
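
To illustrate, here is a minimal sketch of such an integration point re-expressed as XML over HTTP, with a hypothetical endpoint and message shape. Note that nothing about the scope or ownership of the integration changes; only the wire format does:

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    /**
     * The same single-application integration point, now "service oriented":
     * an XML request POSTed over HTTP instead of, say, a Java RMI call.
     */
    public class CustomerLookupClient {

        private final String endpoint; // hypothetical internal service URL

        public CustomerLookupClient(String endpoint) {
            this.endpoint = endpoint;
        }

        public String lookup(String customerId) throws IOException {
            String requestXml =
                "<lookupRequest><customerId>" + customerId + "</customerId></lookupRequest>";

            HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Content-Type", "text/xml");
            conn.setDoOutput(true);
            try (OutputStream out = conn.getOutputStream()) {
                out.write(requestXml.getBytes("UTF-8"));
            }
            try (InputStream in = conn.getInputStream()) {
                ByteArrayOutputStream buf = new ByteArrayOutputStream();
                byte[] chunk = new byte[4096];
                int n;
                while ((n = in.read(chunk)) != -1) {
                    buf.write(chunk, 0, n);
                }
                return buf.toString("UTF-8");
            }
        }
    }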

Because the scope is a single application, it’s unlikely that any ownership domains beyond the application itself were identified, so there won’t be anyone responsible for looking for and removing other redundant service implementations. Because the scope of the services involved didn’t change, only the technologies used, it’s unlikely that the services will have any greater potential for reuse than they would with another integration technology, except that XML/HTTP will be more interoperable than, say, Java RMI, if that’s even a concern. To me, SOA must be applied at something larger than a single application to get anything better than these incremental gains. Services should be defined along ownership domains that create accountability for driving the redundancy out of the enterprise where appropriate.

This is why an application rationalization effort or application/service portfolio management is a critical piece of being successful. If it’s just a “gut feel” that there is a lot of waste in the IT systems, arbitrary use of a different integration technology won’t make that go away. Only working to identify the areas of redundancy/waste, defining appropriate ownership domains, and then driving out the redundancy through the use of services will make a significant difference.

Most Read Posts for 2008

According to Google Analytics, here are the top read posts from my blog for 2008. This obviously doesn’t account for people who read exclusively through the RSS feed, but it’s interesting to know what posts people have stumbled upon via Google search, etc.

10. Governance Does Not Imply Command and Control. This was posted in August of 2008, and intended to change the negative opinion many people have about the term “governance.”

9. To ESB or not to ESB. This was posted in July of 2007, and gave a listing of five different types of ESBs that exist today and how they may (or may not) fit into your environment.

8. Getting Started with SOA Governance. This was posted in September of 2008, just before my book was released. It emphasizes a policy first approach, stressing education over enforcement.

7. Dish DVR Upgrade. This was posted in November of 2007 and had little to do with SOA. It tells the story of how Dish Network pushed out an upgrade to the software on their DVRs that wiped out all of my existing timers, and I missed recording some shows as a result. The lesson for IT: even if you think there’s no chance that a change will impact someone, you still should make them aware that a change is occurring.

6. Most popular posts to date. This is rather humorous. This post from July of 2007 was much like this one: a list of the posts that Google Analytics had shown as most viewed since January of 2006. Maybe this one will show up next year. It at least means someone enjoys these summary posts.

5. Dilbert’s Guide to Governance. In this post from June of 2007, I offered some commentary on governance in the context of a Dilbert cartoon that was published around the same timeframe.

4. Service Taxonomy. Based on an analysis of the search keywords that bring people to my pages, I’m not surprised to see this one here. This was posted in December of 2006, and while it doesn’t provide a taxonomy, it provides two reasons for having taxonomies: determining service ownership and choosing the technical implementation platform. I don’t think you should have taxonomies just to have taxonomies. If the classification isn’t serving a purpose, it’s just clutter.

3. Horizontal and Vertical Thinking. This was posted in May of 2007 and is still one of my favorite posts. I think it really captures the change in thinking that is required for more strategic solutions; however, I also now realize that the challenge is in determining when horizontal thinking is needed and when it is not. It’s not an easy question, and it requires a broad understanding of the business to answer correctly.

2. SOA Governance Book. This was posted in September of 2008 and is when I announced that I had been working on a book. Originally, this had a link to the pre-order page from the publisher, later updated to include direct links there and to the page on Amazon. You can also get it from Amazon UK, Barnes and Noble, and other online bookstores.

1. ITIL and SOA. Seeing this post come in at number one was a surprise to me. I’m glad to see it up there, however, as it is something I’m currently involved with, and also an area in need of better information. There are so many parallels between these two efforts, and it’s important to eliminate the barriers between the developer/architecture world of SOA and the infrastructure/operations world of ITIL/ITSM. Look for more posts on this subject in 2009.

Best of Breed or Best Fit?

I saw the press release from SoftwareAG that announced their “strategic OEM partnership” with Progress Software for their Actional products.  While I’m not going to comment on that particular arrangement, I did want to comment on the challenge that we industry practitioners face when trying to leverage vendor technologies these days.

There has been a tremendous amount of consolidation in the SOA space. There’s also been a lot of consolidation in the Systems Management space, another area to which I pay a lot of attention. Unfortunately, the challenge still comes down to an integration problem. Smaller companies may be more nimble and able to add desired capabilities quickly. This approach is commonly referred to as “best of breed,” where you pick the product that is best for your immediate needs in a somewhat narrow area. Eventually, you will need to integrate those systems into something larger. This is where a “best fit” approach sometimes comes into play. Here, the desire is to focus more on breadth of capability than on depth of capability.

The definition of appropriate breadth is always changing, which is why many of the “best fit” vendors have grown by acquisition rather than by continued enhancements and additions to their own solutions. Unfortunately, this approach doesn’t necessarily make the integration challenges go away. Sometimes it only means that the vendor is well positioned to offer consulting services as part of the deal, rather than having to go through a third-party systems integrator. It does mean that the customer has a “single throat to choke,” but I don’t know about you: I’d much rather have it all work and not have to choke anyone.

This recent announcement is yet another example of the relationships between vendors that can occur.  OEM relationships, rebranding, partnerships, etc.  Does it mean that we as end users get a more integrated product?  I think the answer is a firm maybe.

The only approach that makes sense to me is to always retain control of your architecture. It doesn’t do any good to ask generic questions like “Does your product integrate with foobar?” or “How easy is it to integrate with such-and-such?” You need to know the specifics of where and how you want these systems to integrate, and then compare that to what the vendors have to say, whether it’s all within their own suite of branded products or involves partners and OEM agreements. The more specifics you have, the better. You may find that a highly integrated suite is integrated in name only, or that it really does operate as a well-oiled machine. Perhaps you’ll see a small vendor that has worked their tail off to integrate seamlessly into a larger ecosystem, and perhaps you’ll find a small vendor that is best left as an island in the environment.

Then, after getting answers, go through a POC effort to actually prove it out and get your hands dirty (you execute the POC, not the vendor). There are many choices involved in integrating these systems, such as what the message schemas will be and the mechanisms of the integration itself: are you integrating “at the glass” via cut and paste between applications? Are you integrating in the middle via service interactions in the business tier? Or are you integrating at the data layer, either through direct database access or through some data integration/MDM-like layer? Just those questions alone can cause significant differences in your architecture. The only way to see what’s really involved in the integration effort is to sit down and try it out: first define how you’d like it to work through a reference architecture, then question the vendors on how well they map to your reference architecture, and finally get your hands dirty in a POC, actually trying to make it work as advertised in those discussions.

Integration Competency Centers and SOA

Lorraine Lawson of IT Business Edge had a post last week, “The Best Practice That Companies Ignore,” that linked to my previous posts on Centers of Excellence and Competency Centers. In the article, she references an eBizQ survey which revealed that only 9% of respondents had a competency center or center of excellence. While she wasn’t surprised at this, she was surprised at recent comments from Ken Vollmer of Forrester that the same is true for Integration Competency Centers, a concept that has been around for several years. In her discussion with Ken, he indicated that “any organization with mid-to-high-level integration issues could benefit from an ICC.” My take on the discussion was that Ken feels every mid-to-large organization should have one (my inference; neither he nor Lorraine stated this explicitly).

The real issue I had with some of the justifications for having an ICC was the underlying assumption that integration is a specialized discipline. While this was the case 8-10 years ago, I think we’ve made significant progress since then. I actually think an ICC can be a specific detriment to an SOA effort. When an ICC exists, integration is now someone else’s problem: I worry about my world, and I leave it up to the integration experts to make my world accessible to everyone else. It’s this type of thinking that will doom an SOA effort, because everyone’s first concern is themselves, not everyone else. To do SOA right, your service teams should be consumer-focused first.

Regarding ICCs, the reason I don’t think there is broad adoption of the concept is that the majority of companies, even large enterprises, have only one or two major systems that represent 80% of the integration effort, typically either mainframe integration or ERP integration. Companies that have grown via acquisition may have a much more difficult problem, with multiple mainframes, multiple ERP systems, etc., and for them, ICCs are a good fit. I just don’t think that’s 80% of mid-to-large businesses.

The last piece of the message, and where she linked to my posts, deals with whether or not the ICC should be temporary. Ken’s comment was that there are always new integration tools coming out, and the ICC should be responsible for them. I don’t agree with this. There are also new development tools coming out, and I don’t see companies with a development competency center. Someone does have to be responsible for integration technologies, but this could easily be part of the responsibilities of a middleware technology architect.

Applying the same argument to SOA: again, if it’s technology-focused, I don’t buy it. If we get into the space of SOA advocacy and adoption, then I think there’s some value. Clearly, individual projects building services does not constitute SOA. Given that, who is guiding the broader SOA effort? Perhaps what is ultimately needed is an SOA Advocacy Center or SOA Adoption Center that is responsible for seeing it forward. There’s no formula for this, though. A person dedicated to being the SOA champion, with excellent relationships in the organization, could potentially do this on their own. Ultimately, this becomes just like any other strategic initiative: to achieve the strategy, the organization must put proper leadership in place. If it’s one person, great. If it’s a standing committee, great. Just as long as it is positioned for success. Putting one person in charge who lacks the relationships won’t cut it, but putting a committee together to establish those relationships will. Whether it’s permanent or not depends on whether the activities can become standard practice, or whether there is a continual need for leadership, guidance, and governance.

Integration as a Service

A combination of things, including the Workday acquisition of CapeClear and doing my taxes, got me thinking a bit more deeply about integration as a service (IaaS). When I have my large enterprise hat on, I’ll admit that IaaS doesn’t excite me very much. The main reason starts with what I picture when I hear the term: taking all of the mapping exercises, transformations, etc. that are done today using some visual tool from an EAI vendor and moving them to a web-based system. I still need to do all of the work to wire things up, but the actual processing behind all of this goes on outside the firewall. This certainly doesn’t make sense if all of my integration points exist within the firewall, and I’d go further to say that it’s not very attractive when only one of the integration points is outside the firewall. Why? Well, there are already providers out there that will not only handle the processing, but also do the mapping. Essentially, I give them information on how I want my data, and they take care of doing all the dirty work to map to the other integration point. So, from this perspective, my definition of IaaS actually winds up having me do more work than the traditional “integration as a service” providers do.
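
To be concrete about the “dirty work” in question, here is a minimal sketch of the mapping step using the JDK’s built-in XSLT support; the stylesheet and message shapes would be whatever the two integration points require:

    import java.io.StringReader;
    import java.io.StringWriter;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerException;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.stream.StreamResult;
    import javax.xml.transform.stream.StreamSource;

    /**
     * The mapping/transformation step itself: apply a stylesheet that maps
     * one party's message format to the other's. Hosting and maintaining
     * this logic is what an "integration as a service" provider would take
     * off my hands.
     */
    public class MessageMapper {

        public static String map(String sourceXml, String mappingXslt)
                throws TransformerException {
            Transformer transformer = TransformerFactory.newInstance()
                    .newTransformer(new StreamSource(new StringReader(mappingXslt)));
            StringWriter out = new StringWriter();
            transformer.transform(new StreamSource(new StringReader(sourceXml)),
                                  new StreamResult(out));
            return out.toString();
        }
    }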

If all of my integration points are outside of the firewall, things get a bit more interesting. This occurred to me when I was doing my taxes. Most of the leading tax software providers have both desktop versions and online versions. In the case of the online version, the data resides in the cloud. The data behind the tax return must integrate with data from other cloud providers, such as payroll systems and financial services companies. While this integration works well, the same situation isn’t as good for general personal financial planning software. Take Quicken for example. In the desktop version, some of my financial data can’t be automatically imported into Quicken, because the financial provider hasn’t exposed the information according to the necessary standards. Interestingly, however, when I looked into the recently announced Quicken Online, some of the financial providers that didn’t work in the desktop version were able to be integrated into the online version. I suspect that there may be some screen scraping going on, as is the case for many financial aggregators like Yahoo Finance.

Anyway, the short point of that long-winded paragraph is that when all of the data exists in the cloud, there’s no doubt that the need to integrate that data for one purpose or another will soon follow. The key question, however, is whether services should be provided to allow individuals to create their own integration paths, or whether the service providers will be expected to simply integrate with other leading offerings, much in the same way we expect the software we install inside our firewalls or inside our homes to integrate. I suspect it will be the latter. Whether it’s the consumer market or the enterprise market, I think we all want the difficult integration problems to be handled for us. The vendors that are able to do so will be successful.

Registries, Repositories, and Bears, oh my!

Okay, no bears, sorry. I read a post from my good friend Jeff Schneider regarding SAP’s Enterprise Service Repository (ESR). He states:

At the core of the SAP SOA story is the Enterprise Service Repository (ESR). It is actually a combination of both registry and repository. The registry is a UDDI 3.0 implementation and has been tested to integrate with other registries such as Systinet. But the bulk of the work is in their repository. Unlike other commercial repositories, the first thing to notice is that SAP’s is pre-populated (full, not empty). It contains gobs of information on global data types, schemas, wsdl’s and similar artifacts relating to the SAP modules.

This brings registry/repository into the mix of infrastructure products about which SAP customers must make adoption and placement decisions. Do they leverage what SAP provides, or do they go with more neutral products from a pure infrastructure provider such as BEA, HP, SOA Software, or SoftwareAG/WebMethods? The interesting thing with this particular space is that it’s not as simple as picking one. Jeff points out that the SAP ESR comes pre-populated with “gobs of information” on assets from the SAP modules. Choose something else, and this metadata goes away.

I hope that this may bring some much needed attention to the metadata integration/federation space. It’s not just a need to integrate across these competing products, but also a need to integrate with other metadata systems such as configuration management databases and development lifecycle solutions (Maven, Rational, Subversion, etc.). I called this Master Metadata Management in a previous post.

Back when Gartner was pushing the concept of the ESB heavily, I remember an opening keynote from Roy Schulte (I think) at a Web Services Summit in late 2005. He was emphasizing that an organization would have many ESBs that would need to interoperate. At this point, I don’t think that need is as critical as the need for our metadata systems to interoperate. You have to expect that as vendors of more vertical/business solutions start to expose their capabilities as services, they are likely to come with their own registry/repository containing their metadata, especially since there’s no standard way to just include this with a distribution and easily import it into a standalone RR. It would be great to see some pressure from the end-user community to start making some of this happen.
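
The inquiry side, at least, is standardized. Here is a minimal sketch of querying a UDDI registry through the JAXR API, with a hypothetical registry URL; what’s missing is an equivalent standard for bulk-loading a vendor’s metadata into whatever registry/repository you’ve chosen:

    import java.util.Collections;
    import java.util.Properties;
    import javax.xml.registry.BulkResponse;
    import javax.xml.registry.BusinessQueryManager;
    import javax.xml.registry.Connection;
    import javax.xml.registry.ConnectionFactory;
    import javax.xml.registry.JAXRException;
    import javax.xml.registry.infomodel.Service;

    /** Query a UDDI registry through JAXR, the standard Java inquiry API. */
    public class RegistryQuery {

        public static void main(String[] args) throws JAXRException {
            Properties props = new Properties();
            props.setProperty("javax.xml.registry.queryManagerURL",
                    "http://registry.example.com/uddi/inquiry"); // hypothetical

            ConnectionFactory factory = ConnectionFactory.newInstance();
            factory.setProperties(props);
            Connection connection = factory.createConnection();

            BusinessQueryManager bqm =
                    connection.getRegistryService().getBusinessQueryManager();

            // find services whose names contain "Customer"
            BulkResponse response = bqm.findServices(
                    null, null, Collections.singleton("%Customer%"), null, null);

            for (Object obj : response.getCollection()) {
                Service service = (Service) obj;
                System.out.println(service.getName().getValue());
            }

            connection.close();
        }
    }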

Composite Applications

Brandon Satrom posted some of his thoughts on the need for a composite application framework, or CAF, on his blog and specifically called me out as someone from which he’d like to hear a response. I’ll certainly oblige, as inter-blog conversations are one of the reasons I do this.

Brandon’s posted two excerpts from the document he’s working on, here and here. The first document tries to frame up the need for composition, while the second document goes far deeper into the discussion around what a composite application is in the first place.

I’m not going to focus on the need for composition, for one very simple reason. If we look at the definition presented in the second post, as well as the one articulated by Mike Walker in his follow-up post, composite applications are ones that leverage functionality from other applications or services. If this is the case, shouldn’t every application we build be a composite application? There are vendors out there who market “Composite Application Builders,” which can largely be described as EAI tools focused on the presentation tier. They contain some form of adapter for third-party applications and legacy systems that allows functionality to be accessed from the presentation tier, rather than acting as a general-purpose service enablement tool. Certainly, there are enterprises that have a need for such a tool. My own opinion, however, is that this type of approach is a tactical band-aid. By jumping to the presentation tier, there’s a risk that these integrations are all done from a tactical perspective, rather than taking a step back and figuring out what services need to be exposed by your existing applications, completely separate from the construction of any particular user-facing application.

So, if you agree with me that all applications will be composite applications, then what we need is not a Composite Application Framework, but a Composition Framework. It’s a subtle difference, but it gets us away from the notion of tactical application integration and toward the strategic notion of composition simply being part of how we build new user-facing systems. When I think about this, I still wind up breaking it into two domains.

The first is how to allow user-facing applications to easily consume services. In my opinion, there’s not much different here from the things you need to do to make services easily consumable by any consumer, user-facing or not. The assumption needs to be that a consumer is likely to be using more than one service, and that it will need to share some amount of data across those services. If the data is represented differently in those services, we create work for the consumer: it must translate and transform the data from one representation to one or more additional representations. If this is a common pattern for all consumers, this logic will be repeated over and over. If our services all expose their information in a consistent manner, we can minimize the amount of translation and transformation logic in the consumer and implement it once in the provider. Great concept, but also a very difficult problem. That’s why I use the term consistent, rather than standard. A single messaging schema for all data is a standard, and by definition consistent, but I don’t think I’ll get too many arguments that coming up with that one standard is an extremely difficult, and some might say impossible, task.
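
Here is a trivial sketch of the idea, with invented names: the translation from an internal model to the consistent representation lives with the provider, once, rather than being repeated in every consumer:

    /**
     * The consistent representation that all services expose. Invented
     * names; the hard part is agreeing on this shape, not coding it.
     */
    public class CanonicalCustomer {
        public final String customerId;
        public final String displayName;

        public CanonicalCustomer(String customerId, String displayName) {
            this.customerId = customerId;
            this.displayName = displayName;
        }
    }

    /** One provider's internal shape, which consumers never see. */
    class CrmAccount {
        String acctNo;
        String firstName;
        String lastName;
    }

    class CrmCustomerService {
        /** The translation is implemented once, on the provider side. */
        CanonicalCustomer toCanonical(CrmAccount a) {
            return new CanonicalCustomer(a.acctNo, a.firstName + " " + a.lastName);
        }
    }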

Beyond this, what other needs are there that are specific to user-facing consumers? Certainly, there are technology decisions that must be considered. What’s the framework you use for building user-facing systems? Are you leveraging portal technology? Is everything web-based? Are you using AJAX? Flash? Is everything desktop-based using .NET and Windows Presentation Foundation? All of these things have an impact on how your services that are targeted for use by the presentation tier must be exposed, and therefore must be factored into your composition framework.

Beyond this, however, it really comes down to an understanding of how applications are going to be used. I discussed this a bit in my Integration at the Desktop posts (here and here). The key question is whether you want a framework that facilitates inter-application communication on the desktop, or whether you want to deal with things in a point-to-point manner as they arise. The only way to know is to understand your users, not through a one-time analysis, but through continuous communication, so you know whether a need exists today and whether one is coming in the near future.

Any framework we put in place is largely about building infrastructure. Building infrastructure is not easy. You want to build it in advance of need, but sometimes gauging that need is difficult. Case in point: Lambert St. Louis International Airport has a brand new runway that essentially sits unused. Between the time the project was funded and completed, TWA was purchased by American Airlines, half of the flights in and out were cut, Sept. 11th happened, etc. The needs changed. They have great infrastructure, but no one to use it. Building an extensive composition framework at the presentation tier must factor in the applications your users currently leverage, the increased use of collaboration and workflow technology, the things users do on their own through Excel, web-based tools, and anything else they can find, how their job function is changing according to business needs and goals, and much more.

So, my recommendations in this space would be:

  1. Start with consistency of data representations. This has benefits for both service-to-service and UI-to-service integration.
  2. Understand the technologies used to build user-facing applications, and ensure that your services are easily consumable by those technologies.
  3. Understand your users and continually assess the need for a generalized inter-application communication framework. Be sure you know how you’ll go from a standard way of supporting point-to-point communication to a broader communication framework if and when the need becomes concrete.

Cool iPhone Feature

On a whim, I just discovered that if you navigate to a Google Maps URI within Safari on the iPhone, it launches the native Google Maps application rather than staying within Safari. It even works for directions: I tried a link with directions from the St. Louis Airport to Busch Stadium, and the Maps application took over. This is a pretty slick way of providing integration between the native iPhone apps and the Web.

SOA in the home

I’ve previously posted on SOA in the home. Well, Peter Rhys Jenkins of IBM is doing it. I heard him speak once in St. Louis and he was very entertaining. Anyway, here’s the article on what he’s doing in his house.

Integration at the Desktop, Part 2

In addition to commenting on my blog, Francis Carden, CEO of OpenSpan, was also kind enough to give me a short demo of their product. In my previous post, I introduced the concept of a “Desktop Service Bus” and wondered if the product would behave in this fashion. One of the interesting things I hadn’t thought through, however, is what exactly a desktop service bus should behave like. For that matter, what’s the right model for working with an enterprise service bus? More on that in a second.

Francis did a nice little demonstration for me that showed how custom integrations could be built quickly, first by interrogating existing applications (desktop or web-based) and grabbing possible integration points (virtually any UI element on the screen), and then by using a visual editor to connect up components in a pipeline-like manner. If you’re familiar with server-side application integration technologies, think of this tool as providing an orchestration environment, as well as the ability to build adapters on the fly through interrogation.

Clearly, this is a step in the right direction. Francis made a great comment to me, which was, “People stopped thinking about this [desktop integration] because they’d long forgotten it was possible.” He’s right about this. With the advent of web-based applications, many people stopped talking about OLE and other desktop application integration techniques. The need hasn’t gone away, however. Again, using the iPhone as an example, many people complain about its lack of cut-and-paste capabilities.

Bringing this back to my concept of a desktop service bus, there clearly is a long way to go. When I see tools like OpenSpan or Apple’s Automator, it’s clear that they’re targeted at situations where a need to integrate is identified after the fact. You have two systems that no one had thought of integrating previously, but now there is a need to do so. This is no different than integration on the server side, except that you’re much more likely to hear the term “silo” used. When I think about the concept of a desktop service bus, or even an enterprise service bus for that matter, the reason a usage metaphor doesn’t immediately come to mind is that it’s not the way we’ve traditionally done things. When we’re building a new solution, the collection of available services should simply be there. There’s a huge challenge in trying to organize them, but if we can organize all of the classes in the Java APIs and all of the variety of extensions through intelligent code completion, why can’t we do the same with services, whether available through a network interaction or through desktop integration? It will take a while before this becomes the norm, but thankfully, I think the connectivity of the web is actually helping in this regard. Users of sites like Flickr, Facebook, Twitter, MySpace and the like expect the ability to mash and integrate, whether with their mobile phones, their desktop machines, other web sites, and more. Integration as the norm will be a requirement going forward.

Integration at the Desktop

One of my email alerts brought my attention to this article by Rich Seeley, titled “Desktop Integration: The last mile for SOA.” It was a brief discussion with Francis Carden, CEO of OpenSpan Inc. on their OpenSpan Platform. While the article was light on details, I took a glance at their web site, and it seems that the key to the whole thing is this component called the OpenSpan Integrator. Probably the best way to describe it is as a Desktop Service Bus. It can tap into the event bus of the underlying desktop OS. It can communicate with applications that have had capabilities exposed as services via the OpenSpan SOA Module, probably through the OpenSpan Studio interrogation capability. This piqued my interest, because it’s a concept that I thought about many years ago when working on an application that had to exist in a highly integrated desktop environment.

Let’s face it, the state of the art in desktop integration is still the clipboard metaphor. I cut or copy the information I want to share from one application to a clipboard, and then I paste it from the clipboard into the receiving application. In some cases, I may need to do this multiple times, once for each text field. Other “integrated” applications may have more advanced capabilities, typically a menu or button labeled “Send to ABC…” For a few select things, there are standard services that are “advertised” by the operating system, such as sending email, although it’s likely that these are backed by operating system APIs put in place at development time. As an example, if I click on a mailto: URL on a web page, that’s picked up by the browser, which executes an API call to the underlying OS capabilities. The web page itself cannot publish a message to a bus on the OS that says, “Send an email to user joe@foobar.com with this text.” This is in contrast to a server-side bus, where this could be done.
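
To make the contrast concrete, here is what publishing to such a bus might look like next to today’s mailto: handling. To be clear, no standard OS-level API like this exists; every name in this sketch is invented:

    import java.util.HashMap;
    import java.util.Map;

    /**
     * Hypothetical desktop service bus. Nothing like this is standard
     * today; the interface and topic names are invented purely for
     * illustration.
     */
    public interface DesktopBus {
        void publish(String topic, Map<String, String> message);
    }

    class MailExample {
        void sendMail(DesktopBus bus) {
            // today: a mailto: URL is caught by the browser, which calls a
            // compiled-in OS API; the page itself publishes nothing
            Map<String, String> msg = new HashMap<>();
            msg.put("to", "joe@foobar.com");
            msg.put("body", "this text");
            bus.publish("desktop/email/send", msg); // the bus alternative
        }
    }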

In both the server-side and desktop worlds, we have the big issue of not knowing ahead of time what services are available and how to represent the messages for interacting with them. While a dynamic lookup mechanism can handle the first half of the problem, the looming problem of constructing suitable messages still exists. This is still a development-time activity. Unfortunately, I would argue that the average user is still going to find an inefficient cut-and-paste approach less daunting than trying to use one of the desktop orchestration tools, such as Apple’s Automator, for something like this.

I think the need for better integration at the human interaction layer is even more important with the advances in mobile technology. For example, I’ve just started using the new iPhone interface for Facebook. At present, there is no way for me to take photos from either the Photos application or the Camera application and have them uploaded to Facebook. A desktop application isn’t much better, because the fallback is to launch a file browser and require the user to navigate to the photo. Anyone who’s tried to navigate the iPhoto hierarchy in the file system knows this is far from optimal. It would seem that the right way to approach this would be to have the device advertise Photo Query services that the Facebook app could use. At the same time, it would be painful for Facebook if they had to support a different Photo Query service for every mobile phone on the market.
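
Sketching that thought, again with entirely invented names: if devices advertised a standard interface along these lines, an application like Facebook could write to it once rather than once per phone:

    import java.util.Date;
    import java.util.List;

    /**
     * Hypothetical Photo Query service a device could advertise. This is
     * an invented interface, not any real platform API.
     */
    public interface PhotoQueryService {

        /** Photos taken within a date range, newest first. */
        List<PhotoRef> findByDateRange(Date from, Date to);

        /** The image bytes for a previously returned reference. */
        byte[] retrieve(PhotoRef ref);

        /** Opaque handle to a photo; the app never touches the file system. */
        interface PhotoRef {
            String id();
        }
    }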

The point of this post is to call some attention to the problem. What’s good for the server side can also be good for the human interaction layer. Standard means of finding available services, standard interfaces for those services, etc. are what will make things better. Yes, there are significant security issues that would need to be tackled, especially when providing integration with web-based applications, but without a standard approach to integration, it’s hard to come up with a good security solution. We need to start thinking about all of these devices as information sources, and ensure that our approach to integration handles not just the server-side efforts, but the last mile to the presentation devices as well.

Is Apple in the home like Microsoft in the enterprise?

I was just having a discussion with someone regarding Apple’s recovery over the last ten years and what the future holds for them when it dawned on me that there are parallels (sorry, no pun intended) between Microsoft’s efforts in the server-side space in the enterprise and Apple’s efforts in the home.

There’s no doubt that Apple’s strategy has always been about having end-to-end control of the entire platform, from hardware to software. There are advantages and disadvantages to this, with the clear disadvantage being market share and the clear advantage being user experience. On the Microsoft side, when they entered the enterprise market, and this still holds true today, it was really about getting as much Microsoft software in there as possible. They would like to own the software platform from end to end.

The parallel is that when Microsoft moved beyond the desktop, where they had nearly all of the market share, they suddenly had to deal with a heterogeneous environment rather than a homogeneous one. Microsoft’s strategy is not one of integration, however; it is about replacement. Over time, they’ve had to yield to the fact that integration will always be necessary, and that many infrastructures are too well established to incur the cost of a migration to an all-Microsoft environment. That being said, Microsoft would be happy to take your money and do it, and they continue to position their products so that thought stays in the back of your mind. I don’t know of anyone who would argue with the statement that Microsoft solutions work best in an all-Microsoft environment. That’s not to say that they don’t work really well in a heterogeneous environment; it simply means that if you want the best Microsoft has to offer, you have to go 100% Microsoft.

Now let’s talk about Apple. I’d argue that the market for the integrated, intelligent home is at around the same point (maybe a bit less mature) that enterprise infrastructures were at when the whole middleware rage occurred in the ’90s. Companies then were just starting to realize the potential and the importance of integrating their disparate systems. Today, consumers are just starting to realize the potential of integrating the technology in their houses. I’m not going to make any predictions about when it will become mainstream, as such predictions are usually wrong, but I do think it’s safe to say that the uptake is definitely increasing in slope rather than remaining flat. Apple is in a very similar position to Microsoft. The home is a heterogeneous environment, and Apple works best in an all-Apple environment. Will Apple take a path similar to Microsoft’s, where they integrate where they have to but are really focused on getting a foot in the door, after which it’s all about more Apple? Or will they make careful decisions about where the strategy is integration and where the strategy is extending the platform? To date, I think they’ve done the latter. We don’t see an Apple-branded TV; instead, we have a set-top box that talks to TVs.

The biggest factor may not be what Apple does, but what everyone else does. Microsoft continues to gain market share in the enterprise because integration of a heterogeneous environment is still a painful exercise. As long as there is pain in integration, there’s always opportunity for platform-based approaches to gain ground. Integration in consumer technologies is certainly a different beast, as there are standards and a certain level of status quo. It’s not a painful effort to hook up stereo components from multiple vendors. At the same time, however, it’s ripe for improvements in the experience; case in point, the 100+ button remote control associated with most receivers. Likewise, the standards change all too often. Back when digital camcorders came out, Apple had a big win with iMovie integration that no one else had. Over the past 8 years, however, the digital camcorder manufacturers have changed formats to the point where you can’t say whether a given digital camcorder will work with iMovie or not. It just shows that if you don’t control the platform end-to-end, your entire strategy can fall apart quickly based upon the pieces outside of your control.

I think Apple is taking a very careful approach to which problems to tackle and when. The one thing I’m sure of is that Apple’s presence in the consumer space will make the next 10 years in the home very exciting. While one could argue that the availability of the Internet in the home is what started demand increasing at a faster pace, I also think you can argue that Apple’s products, more so than those of any other consumer products company, have enabled that pace to continue to increase.
