Archive for the ‘Usability’ Category
In this article from the Wall Street Journal, author Christopher Mims quotes data from mobile analytics company Flurry showing that 86% of our time on mobile devices is spent in apps, and just 14% on the web. While Christopher’s article laments this as the “death of the web”, I’d like to put a different spin on it. We are now entering the age of what I call the “micro-UI”.
The micro-UI represents a shift toward very targeted user experiences focused on a much smaller set of capabilities. A phrase I’ve used is that we are now bringing the work to the user, rather than bringing the user to the work. It used to be that you only had access to a screen when you were in the den of your house, at the desk with the built-in cabinet for your “tower” (why do they still sell those?) and a wired connection to your dialup modem, or at the computer on your desk at work. Clearly, that’s no longer the case with smartphones, tablets, appliances, your car, and many more things with the capability to dynamically interact with you. I just saw rumors today about the screen resolution of the new Apple Watch, and I think it has a higher resolution than my original Palm Pilot back in the late ’90s. On top of that, there are plenty of additional devices that can interact indirectly through low-power Bluetooth or other tethering techniques.
In this new era, the focus will be on efficiency. Why do I use an app on my phone instead of going to the mobile web site? Because it’s more efficient. Why do notifications now allow primitive actions without having to launch the app? Because it’s more efficient. It wouldn’t surprise me to even see notifications without the app in the future.
For example, how many of you have come home to the post-it on your door saying “FedEx was unable to deliver your package because a signature is required.” Wouldn’t it be great to get a notification on your phone instead that asks for approval before the driver leaves with your package in tow? But do you really want to have to install a FedEx app that you probably will never open? Why can’t we embed a lightweight UI in the notification message itself?
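To make the idea concrete, here is a minimal sketch of what a notification with an embedded lightweight UI might look like: a payload carrying its own action buttons, plus a tiny dispatcher for the user’s tap. The field names and the carrier response are entirely hypothetical, not any real push-notification API.

```python
# Hypothetical payload for an actionable delivery notification.
# Field names are illustrative, not any vendor's actual API.
notification = {
    "title": "FedEx delivery attempt",
    "body": "A signature is required for your package.",
    "actions": [
        {"id": "approve", "label": "Approve release without signature"},
        {"id": "redeliver", "label": "Schedule redelivery"},
    ],
}

def handle_action(payload, action_id):
    """Dispatch a user's tap on one of the notification's action buttons."""
    valid = {a["id"] for a in payload["actions"]}
    if action_id not in valid:
        raise ValueError(f"unknown action: {action_id}")
    # In a real system this would call back to the sender's service.
    return f"sent '{action_id}' to carrier"
```

The point of the sketch is that the app never has to be installed or launched; the notification itself carries enough UI to complete the task.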
In the enterprise, there are more hurdles to overcome, but that should be no surprise. First, the enterprise is still filled with silos. If it were up to me, I would ban the use of the term “application” for anything other than referring to a user interface. Unfortunately, we’ve spent 30+ years buying “applications,” building silos around them, and dealing with the challenges that creates. If you haven’t already, you need to put that aside and build everything from here on out with the expectation that it will participate in a highly connected, highly integrated world, where trying to draw boundaries around systems is a fruitless exercise. This means service-based architectures and context-launchable UIs (i.e. bringing the user to the exact point in the user interface to accomplish the task at hand). Second, we need to find the right balance between corporate security and convenience. This era of connected devices relies on the open internet, and that doesn’t work very well with the closed intranet. Fortunately, I’m an optimist, so I’m confident that we’ll find a way through this. There are simply too many productivity gains possible for it not to happen.
I believe all of this is a good thing. I think this will lead to new and better user experiences, which is really what’s most important. Unlike Christopher’s article, I don’t see this as the death of the web: without the web as the backing store for all of this information, none of it would be possible. It is a reduction in the use of a browser-based UI, and he’s correct that some good things about the web (e.g. linking) need to be adapted (app linking and switching) to the mobile ecosystem. On the other hand, this increased connectivity presents opportunities for higher productivity. Apple (e.g. Continuity), Google, Microsoft, and others are all over this.
All content written by and copyrighted by Todd Biske. If you are reading this on a site other than my “Outside the Box” blog, it’s probably being republished without my permission. Please consider reading it at the source.
I just posted a response to a question about the iPad in an enterprise setting over in an eBizQ forum and decided that I wanted to expand on it here in a blog post.
Much of the discussion about the iPad is still focused on a feature-by-feature comparison to a netbook or a laptop. The discussion cannot get out of the 20-year-old world of keyboards, mice, and the window and desktop metaphors. To properly think about what the iPad can do, you need to drop all of this context and think about things in new ways. In my previous post on the iPad, I emphasized this point, stating that the iPad is really about taking a new form of interaction (touch, with a completely customizable interface) and putting it on a new form factor. In answering the eBizQ question, I realized that it goes beyond that. The key second factor is context awareness.
Back in 2007, I attended the Gartner Application Architecture, Development, and Integration Summit and the concept of “Context-Oriented Architecture” was introduced. In my blog post from the summit, I stated that:
[Gartner] estimates that sometime in the 2010’s, we will enter the “Era of Context” where important factors are presence, mobility, web 2.0 concepts, and social computing.
In that same post, I went on to state that this notion of context awareness will create a need for very lightweight, specific-purpose user interfaces. At the time, I was leaning toward the use of Dashboard widgets or Vista sidebar items, but guess what has taken over that category? iPhone and iPod Touch apps. Now, we have the potential for a device with a larger form factor that can present a touch-based interface, completely tailored to the task at hand. This is another reason why I don’t see multi-tasking as a big deal. The target for this audience isn’t multi-tasking, it’s these efficient, single-purpose interfaces. Imagine going into a conference room where your iPad is able to determine your meeting room through sensors in the building, knows what meeting you’re in and who else is in the room through calendar integration, knows the subject of the meeting, and can now present you with a purpose-driven interface for that particular meeting. Our use of information can be made much more efficient. How many times have you been in a meeting only to wind up wasting time navigating through your files, email, the company portal, etc. trying to find the right information? What if you had an app that organized it all and, through context awareness, presented what you needed? The same certainly holds true for other activities in the enterprise beyond meetings. As we make more use of BPM and workflow technologies, it is certainly possible that context awareness through location, time, the presence of others, and more can allow more appropriate and efficient interfaces for task display and execution, in addition to providing context back into the system to aid in continuous improvement.
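The meeting scenario above can be sketched in a few lines: take the context signals (which room you’re in, what time it is, what the calendar says) and select the purpose-driven interface from them. Everything here, from the room identifiers to the document names, is invented for illustration.

```python
from datetime import datetime

# Illustrative context data: what the building sensors and calendar
# integration might report. All names are hypothetical.
MEETINGS = [
    {"room": "4B",
     "start": datetime(2010, 5, 3, 9), "end": datetime(2010, 5, 3, 10),
     "subject": "Q2 roadmap review",
     "documents": ["roadmap.pdf", "budget.xls"]},
]

def interface_for_context(room, now, meetings=MEETINGS):
    """Pick the purpose-driven interface for where the user is right now."""
    for m in meetings:
        if m["room"] == room and m["start"] <= now < m["end"]:
            return {"subject": m["subject"], "documents": m["documents"]}
    return None  # no meeting context: fall back to a default home screen

# Walking into room 4B at 9:15 surfaces exactly the meeting's materials:
ctx = interface_for_context("4B", datetime(2010, 5, 3, 9, 15))
```

The interesting work in a real system is in gathering the context signals reliably; the selection logic itself stays this simple.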
This isn’t going to happen overnight, but I am very excited to see whether Gartner’s prediction of the 2010’s being the “era of context” comes true. I think it will, and it will be great to look back from 2020 and see just how much things have changed.
Those of you that follow me on Twitter know that my Kindle 1 recently suffered an untimely demise. I had the option of purchasing a refurbished Kindle 1, or getting the new Kindle 2, and I opted for the latter. I thought I’d highlight some of the differences that I’ve noticed for those of you that are considering upgrading and giving your Kindle 1 to another family member or friend.
Ergonomics. Like many Kindle 1 owners, I frequently would pick the device up and accidentally hit the next-page button, or open it up in its case to find that I had pressed the menu button a few times. That same feature, however, was a plus when I was actually using it: you can hit those buttons just about anywhere and they will respond. In addition to those buttons, the power switch and wireless switch on the back of the device were simply inconvenient. Beyond the buttons, the device had a bit of a flimsy feel to it. While I never had any problems with it, durable is not the word that would come to mind. At the same time, the actual shape and weight of the device were very book-like, which was appealing.
The Kindle 2 is very different. It is much thinner and feels much sturdier. At the same time, there’s a lot more “whitespace” around the screen, which is essentially wasted space. I would have preferred added thickness rather than width. There are no problems with accidentally hitting the next-page buttons, and the power switch was moved to the top of the device, making it accessible when the device is in its case. The wireless switch was removed entirely and must now be controlled through a menu (I preferred having the physical switch). On the downside, the buttons aren’t as easy to press as on the Kindle 1. I was accustomed to hitting the outside edge of the button, which works very well on an elliptical trainer in the gym, and that won’t work with the Kindle 2; you have to press the face of the button. Second, the changes in shape do make the device less book-like, especially when it’s not in its case. With the case on (the Amazon one, which must now be purchased separately), it was less of an issue. Finally, while it is an extra purchase, the latching mechanism for hooking it into the new case is much better. I have not had any issues with it falling out of the case.
Usability/Performance. I really didn’t have any issues with the performance of my Kindle 1. Yes, there’s the flash associated with page turns, but that’s an artifact of any e-reader that uses eInk technology. Some people felt that there would be too much page flipping, but it didn’t bother me at all. The Kindle 2’s performance is noticeably faster, but as I often tell people when discussing performance, the Kindle 1 was already good enough, so this wasn’t a big deal. The second improvement on the Kindle 2 is better grayscale support. If you’re using the Kindle to read technical documents, which I do, then I think this is something you might find important. The Kindle 1 could only do 4 shades of gray, the Kindle 2 can do 16, and this does make a difference. For reading fiction, this is less of an issue. Finally, the Kindle 1 had a mirrored scrollbar that ran parallel to the vertical axis of the screen. You used a scroll wheel to position it, and clicked it to select. The Kindle 2 replaced the scroll wheel with a joystick and did away with the mirrored scrollbar. I assume it’s because the performance of the screen improved, so they felt the scrollbar wasn’t needed. Personally, I liked the scrollbar better. Again, it’s not a huge deal though.
Overall, the Kindle 2 verified my initial thoughts from the original announcement. It’s definitely an incremental improvement, but I don’t think the feature set associated with it is compelling enough for someone to ditch/sell their Kindle 1. There are still some things to work out, such as getting the ergonomics around those page buttons a bit better so they’re still very convenient, but not easily clicked by mistake. If you’re considering a Kindle 2 as your first e-reader, I absolutely recommend it. I love the reading experience on it, I love being able to manage my documents via Amazon, I like that it syncs up where you are within a book if you also have the iPhone Kindle app, and the convenience of the wireless modem for purchasing new content whenever and wherever (if you’re in the US) is great.
If you didn’t know it, today is World Usability Day. Besides the events from the usability team here at work, I was reminded of the importance of usability when I had to press “On” three times on the projector to get it to really come on. Usability has always been a passion of mine, and frankly, its importance is still lost on many in IT. Take the time to understand your usability team and how they can improve your solutions. If you don’t have one, consider forming one.
Frequent commenter Rob Eamon suggested a topic for me, hopefully he won’t mind me copying his email verbatim here:
One term that is starting to bug me more and more is “the business.” There is an implicit definition about this group and in IT circles that group is almost always referred to as “they.”
“That’s for ‘the business’ to decide.”
“We need to touch base with ‘the business’ on that.”
IMO, this tends to reinforce the divide between groups.
So what exactly is “the business?” Why is IT typically assumed to be excluded from this group? Aren’t all groups in the company part of “the business?” Why do so many refer to “the business” as “internal customers?”
Smaller companies seem to embrace this seemingly arbitrary division much less so than do large companies.
I’m with you Rob. Many organizations have a separation between IT and everyone else, and even worse, it’s always IT referring to everyone else as “the business,” versus the non-IT staff treating IT as if it weren’t part of the company (although that happens too).
Part of the challenge is that IT works with nearly everyone outside of IT, and there isn’t a good term to describe them as a whole. Unfortunately, you hit the nail on the head. By referring to them as “the business,” there’s an implication that IT is not part of “the business.” It’s a shame that this is the case. I remember back in my days of focusing on usability where I had the opportunity to work on a cross-functional team with members of the Internet marketing group on an application. It improved things tremendously when we were able to work as a team, rather than as “the business” and IT. Unfortunately, this still tends to be the exception rather than the norm. So what should we do? Well, I don’t think we need another term to use. What we should be doing is getting off of our IT floors, and actually learning about the rest of the business. When we need to refer to a group outside of IT, refer to the group by their organizational name. If the application is for marketing, don’t call them “the business,” call them marketing. If it is for HR, call them HR.
One other question Rob asked was around the notion of “internal customers.” Like him, I don’t like the customer metaphor when talking about internal IT. The scary thing is, if IT had more of a customer service mentality, things would actually be better. The fact is, IT doesn’t have a good track record of customer service. We get away with lousy service because we’re part of the business. If we were an outsourcing company, we probably would have been shown the door a long time ago. IT should be better, not worse, than an outsourcing group. The only way to achieve this, however, is to get everyone in the company operating as a team, and continuing to emphasize that team mentality. It’s far too easy to get away with poor behavior with those you know, because the ramifications are usually less. As a testament to this, I think of my kids. First off, they’re great kids. But, if you’re a parent, I’m sure you can relate to this. My kids typically have excellent manners when dealing with the parents of their friends. When they come home, though, manners can be forgotten at the door. From a ramifications standpoint, my wife and I would probably be much more upset if they took out a bunch of toys at their friend’s house and didn’t help clean them up than if they took them out at our house and didn’t clean them up. Wouldn’t it be great if we had consistent behavior at both?
Part of this is human nature. This is why the “team” mantra must be something that is continually communicated over and over. The company I work for has a huge emphasis on safety. Not a day goes by without safety being brought up in some discussion. If they were to stop communicating about safety, I’m sure the behavior would eventually trend toward less safety. The same holds true for a team mentality. You can’t spend one year emphasizing teamwork and expect everything to stay that way when you stop. If teamwork is important, make it a part of everything you do, and keep it a part of everything you do. If you want to start from a grass roots effort and you work in IT, stop using the term “the business.” Instead, find out what group in the business you’re referring to, and use their name. I know I still use that term on many occasions, so I’m going to eat my own dog food and try to improve from here on out.
I’ve recently been trying to help out with an issue that has required me to roll up my sleeves a bit, and unfortunately it has brought back all-too-familiar memories. Our computer systems (and often the documentation that goes along with them) simply make things way too difficult. In my opinion, the thing that I’m trying to do should be a relatively straightforward function, but as I’ve dug into it, I just seem to run into an endless set of configuration parameters that need to be specified. Now, while I’m pretty far removed from my days as a developer, I still consider myself tech savvy and a very quick learner. I can’t help but think what the average Joe must go through to try to make these things work. It’s almost as if these systems were written to ensure a marketplace for systems integrators and other consultants.
This all comes back to usability and human-computer interaction, areas that have always been a passion of mine. If your products aren’t usable, they’re simply going to create frustration and distrust. If your documentation or support channels are equally poor, the situation will continue to go downhill. What’s even worse is when there isn’t a workaround, and the only option left to a user is to find an expert who can help. As a user, I don’t like to be painted into a corner where I have no options. We need to keep these things in mind when we build our own systems for our business partners. If we design systems that aren’t usable and require that the business come find someone in IT every time they try to perform certain operations, that’s a recipe for disaster. If you don’t have a usability team in your organization, I strongly urge you to find some experts and start building one.
While I didn’t attend the later session on it, in the opening keynote of the AADI Summit on Monday, the term “Context-Oriented Architecture” was mentioned. A colleague (thanks Craig!) caught me in the hall and asked me what my thoughts were on it, and as usual, my brain started noodling away. The end result was me leaving the conversation saying, “you’ll see a blog entry on this very soon!”
I’ve brought up the slides from the Gartner session on it, and they estimate that sometime in the 2010’s, we will enter the “Era of Context” where important factors are presence, mobility, web 2.0 concepts, and social computing. The slides contain a new long acronym, WYNIWYG (Winnie Wig?), which stands for “What You Need is What You Get.” While it seems that the Gartner slides emphasize the importance that mobility will have in this paradigm, I’d like to bring it back into the enterprise context. While mobility is very important, there is still a huge need for WYNIWYG concepts in the desktop context. There will be no shortage of workers who still commute to the office building each day with the computer in their cube or their desktop being their primary point of interaction with the technology systems.
I think the WYNIWYG acronym captures the goal: what you need is what you get. The notion of context, however, implies that what you need changes frequently within a given day. Keith Harrison-Broninski, in his book Human Interactions, discusses how a lot of what we do is driven by the role we are playing at that point in time. If we take on multiple roles during the course of our day, shouldn’t we have context-sensitive interfaces that reflect this? If you’re asking, “Isn’t this the same thing as the personalization wave that went in (and out?) with web portals?”, I want to make a distinction. Context-sensitivity has to be about productivity gains, not necessarily about user satisfaction gains. Allowing a user to put a “skin” on something, or other look-and-feel tweaks, may increase their overall level of satisfaction, but it may not make them any more productive (I’m not implying that look and feel was the only thing the personalization wave was about, but there was certainly a lot of it). As a better example of context-sensitivity, I point to the notion of virtual desktops. This has been around since the days of X Windows, with the most recent incarnation being Apple’s Spaces technology within Leopard. With this approach, I can put certain windows on “virtual desktops” rather than have all of them clutter up a single desktop. With a keystroke, I can switch between them. So, a typical developer may have one “desktop” with Eclipse open and maximized, another “desktop” with Outlook or your favorite mail client of choice, etc. Putting them all in one creates clutter, and the potential for interruptions and productivity losses when I need to shift (i.e. context-shift) from coding to responding to email.
Taking this beyond the developer, I bring in the advent of BPM and workflow technologies. I’ve blogged previously on how I think this will create a need for very lightweight, specific-purpose user interfaces. Going a step further, these entry points should all be context-sensitive. I’m doing this particular task because I’m currently playing this role. Therefore, somehow, I need to have an association between a task and a role, and the task manager on my desktop needs to be able to interact with the user interaction container (not any one specific user interface, but rather a collection of interfaces) in a context-sensitive manner to present what I need. In our discussion, my colleague brought up the example of an employee directory. An employee directory itself probably doesn’t need to be context-aware. What does need to be context-aware is the presence or absence of the employee directory depending on the role I’m currently playing. Therefore, it’s the UI container that must be context-sensitive.
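The role-to-interface association described above can be sketched as a simple mapping that the UI container consults when the role changes. The role names and widget names here are made up purely for illustration.

```python
# Sketch of a context-sensitive UI container: which entry points appear
# depends on the role currently being played, not on any one application.
# Role and widget names are hypothetical.

ROLE_WIDGETS = {
    "developer": ["ide", "build-status"],
    "team-lead": ["employee-directory", "task-approvals", "build-status"],
}

class UIContainer:
    """Holds the collection of lightweight interfaces and filters them by role."""

    def __init__(self, role_widgets):
        self.role_widgets = role_widgets
        self.current_role = None

    def switch_role(self, role):
        """Change context; the visible entry points change with it."""
        self.current_role = role
        return self.visible_widgets()

    def visible_widgets(self):
        return self.role_widgets.get(self.current_role, [])

container = UIContainer(ROLE_WIDGETS)
```

Note that the employee directory itself has no context logic at all; only the container’s mapping decides when it appears, which is exactly the division of responsibility argued for above.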
All in all, this was a very interesting discussion. In looking at the Gartner slides, I definitely agree that this is a 2010+ sort of thing, but if you’re in a position to jump out (way) ahead of the curve, there’s probably some good productivity gains waiting to be had. I recommend getting pretty comfortable with your utilization of BPM technology first, and then moving on to this “era of context.”
I recently had a conversation with Ron Schmelzer of ZapThink and we started talking about how the nature of the entry point for enterprise users to interact with the information technology will change in the future. You’ll notice that I didn’t use the term “application” in that sentence and there’s a reason for that. Personally, I want to get rid of it. To me, it implies a monolith. It’s a collection of functionality that by its very nature goes against the notion of agility. When I look at a future state where we’re leveraging BPM, SOA, and Workflow technology, I see very small, lightweight entry points that are short and to the point. I’ve mentioned this before in connection with Vista Gadgets or MacOS X Dashboard Widgets.
Ron brought up a ZapFlash that came out over a year ago that he wrote called “SOA: Enabling the Long Tail of IT.” I didn’t make the connection at the time, but it makes perfect sense now. In the ZapFlash, Ron describes the “Long Tail” this way:
The Long Tail, a term first coined and popularized by Chris Anderson, refers to the economic phenomenon where products that are of interest to only small communities, and thus result in low demand and low sales volume, can collectively result in a large aggregate market. This large collection of small markets can significantly exceed the more traditional market that the most popular and high volume sales items can generate. For example, Amazon.com generates more business in aggregate from its millions of books that each only sell a few copies than they do from the top 100 best sellers that might each sell tens of thousands of units.
One quick way of summing up the Long Tail is by saying that there’s more opportunity in catering to a mass of niche markets than a niche of mass markets. Large enterprises in particular are composed of masses of such niches, operating in different geographies and business units, catering to specific demographics with tailored solutions to meet the needs of all constituents. And yet, the centralized IT organization that serves the needs of the entire organization is typically woefully unprepared to serve these masses of niches: large numbers of users with widely varying IT needs. How, then, can IT support the needs shared in common with all the business groups without overextending its centralized resource to meet the specific needs of each of the individual groups?
Fundamentally, we’re both talking about the same thing. What I describe as very lightweight, user-facing entry points are the “long tail” of applications. They’re small, niche solutions that get the job done. Underlying all of this is a robust SOA that enables these solutions and is loosely coupled from the user-facing needs. If you think about it, the long tail of application development today is the business user using Excel because they could get done what they needed quickly. I’ve even done this myself, and even progressed to getting a simple database set up to do a bit more. We shouldn’t be on a quest to squash these out, but rather figure out how to enable them in a manageable way. The problem is not that somebody’s Excel macro pulling data out of Oracle exists; the problem is that we’re not aware that it exists. Clearly, someone had a need to put it together, and if we can find a way to enable this so that we’re aware of it and our systems support it easily, even better. Personally, I think the technologies we have at our disposal today are on track for making this a reality.
Brandon Satrom posted some of his thoughts on the need for a composite application framework, or CAF, on his blog and specifically called me out as someone from which he’d like to hear a response. I’ll certainly oblige, as inter-blog conversations are one of the reasons I do this.
Brandon’s posted two excerpts from the document he’s working on, here and here. The first document tries to frame up the need for composition, while the second document goes far deeper into the discussion around what a composite application is in the first place.
I’m not going to focus on the need for composition for one very simple reason. If we look at the definition presented in the second post, as well as articulated by Mike Walker in his followup post, composite applications are ones which leverage functionality from other applications or services. If this is the case, shouldn’t every application we build be a composite application? There are vendors out there who market “Composite Application Builders,” which can largely be described as EAI tools focused on the presentation tier. They contain some form of adapter for third-party applications and legacy systems that allows functionality to be accessed from the presentation tier, rather than acting as general-purpose service enablement tools. Certainly, there are enterprises that have a need for such a tool. My own opinion, however, is that this type of approach is a tactical band-aid. By jumping to the presentation tier, there’s a risk that these integrations are all done from a tactical perspective, rather than taking a step back and figuring out what services need to be exposed by your existing applications, completely separate from the construction of any particular user-facing application.
So, if you agree with me that all applications will be composite applications, then what we need is not a Composite Application Framework, but a Composition Framework. It’s a subtle difference, but it gets us away from the notion of tactical application integration and toward the strategic notion of composition simply being part of how we build new user-facing systems. When I think about this, I still wind up breaking it into two domains. The first is how to easily allow user-facing applications to consume services. In my opinion, there’s not much different here from the things you need to do to make services easily consumable, regardless of whether the consumer is user-facing. The assumption needs to be that a consumer is likely to be using more than one service, and that they’ll have a need to share some amount of data across those services. If the data is represented differently in those services, we create work for the consumer. The consumer must translate and transform the data from one representation to one or more additional representations. If this is a common pattern for all consumers, this logic will be repeated over and over. If our services all expose their information in a consistent manner, we can minimize the amount of translation and transformation logic in the consumer, and implement it once in the provider. Great concept, but also a very difficult problem. That’s why I use the term consistent, rather than standard. A single messaging schema for all data is a standard, and by definition consistent, but I don’t think I’ll get too many arguments that coming up with that one standard is an extremely difficult, and some might say impossible, task.
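The “implement it once in the provider” idea can be sketched as follows: each provider translates its own internal shape into a shared customer representation before exposing it, so no consumer ever writes mapping code. The service names and field names are invented for illustration.

```python
# Sketch of consistent (not standardized) data representations: each
# provider maps its internal record into one shared customer shape once,
# at the service boundary. All field names here are hypothetical.

def billing_to_canonical(rec):
    """Billing system exposes its records in the shared representation."""
    return {"customer_id": rec["cust_no"], "name": rec["cust_name"]}

def crm_to_canonical(rec):
    """CRM system does its own, different mapping to the same shape."""
    return {"customer_id": rec["id"], "name": f"{rec['first']} {rec['last']}"}

# Two services, two internal shapes, one consistent external view:
billing_view = billing_to_canonical({"cust_no": "C42", "cust_name": "Ada Lovelace"})
crm_view = crm_to_canonical({"id": "C42", "first": "Ada", "last": "Lovelace"})

# A user-facing consumer can now combine data from both services
# without any translation logic of its own:
assert billing_view == crm_view
```

The hard part, as noted above, is agreeing on that shared shape across an enterprise; the mechanics of the mapping are the easy part.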
Beyond this, what other needs are there that are specific to user-facing consumers? Certainly, there are technology decisions that must be considered. What’s the framework you use for building user-facing systems? Are you leveraging portal technology? Is everything web-based? Are you using AJAX? Flash? Is everything desktop-based using .NET and Windows Presentation Foundation? All of these things have an impact on how your services that are targeted for use by the presentation tier must be exposed, and therefore must be factored into your composition framework. Beyond this, however, it really comes down to an understanding of how applications are going to be used. I discussed this a bit in my Integration at the Desktop posts (here and here). The key question is whether or not you want a framework that facilitates inter-application communication on the desktop, or whether you want to deal with things in a point-to-point manner as they arise. The only way to know is to understand your users, not through a one-time analysis, but through continuous communication, so you can know whether or not a need exists today, and whether or not a need is coming in the near future. Any framework we put in place is largely about building infrastructure. Building infrastructure is not easy. You want to build it in advance of need, but sometimes gauging that need is difficult. Case in point: Lambert St. Louis International Airport has a brand new runway that essentially sits unused. Between the time the project was funded and completed, TWA was purchased by American Airlines, half of the flights in and out were cut, Sept. 11th happened, etc. The needs changed. They have great infrastructure, but no one to use it. 
Building an extensive composition framework at the presentation tier must factor in the applications that your users currently leverage, the increased use of collaboration and workflow technology, the things that the users do on their own through Excel, web-based tools, and anything else they can find, how their job function is changing according to business needs and goals, and much more.
So, my recommendations in this space would be:
- Start with consistency of data representations. This has benefits for both service-to-service integration, as well as UI-to-service integration.
- Understand the technologies used to build user-facing applications, and ensure that your services are easily consumable by those technologies.
- Understand your users and continually assess the need for a generalized inter-application communication framework. Be sure you know how you’ll go from a standard way of supporting point-to-point communication to a broader communication framework if and when the need becomes concrete.
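The first recommendation above, consistency of data representations, can be sketched in miniature. This is an illustrative example only; the `Customer` entity and its fields are invented, but the point stands: one canonical representation, serialized the same way for every consumer, whether that consumer is another service or a user interface.

```python
from dataclasses import dataclass, asdict
import json

# One canonical representation of a business entity. Both
# service-to-service calls and UI-facing consumers use this same
# structure, so no consumer needs its own mapping layer.
@dataclass
class Customer:
    customer_id: str
    name: str
    status: str

def to_wire(customer):
    """Serialize to the shared JSON representation."""
    return json.dumps(asdict(customer))

def from_wire(payload):
    """Deserialize the shared representation back into the entity."""
    return Customer(**json.loads(payload))

record = Customer(customer_id="C-1001", name="Acme Corp", status="active")
payload = to_wire(record)        # what a service-to-service call sends
rehydrated = from_wire(payload)  # what a UI-facing consumer receives
```

Because both integration styles round-trip through the same representation, a change to the entity is made in exactly one place.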
One of my email alerts brought my attention to this article by Rich Seeley, titled “Desktop Integration: The last mile for SOA.” It was a brief discussion with Francis Carden, CEO of OpenSpan Inc., on their OpenSpan Platform. While the article was light on details, I took a glance at their web site, and it seems that the key to the whole thing is a component called the OpenSpan Integrator. Probably the best way to describe it is as a Desktop Service Bus. It can tap into the event bus of the underlying desktop OS, and it can communicate with applications that have had capabilities exposed as services via the OpenSpan SOA Module, probably through the OpenSpan Studio interrogation capability. This piqued my interest, because it’s a concept I thought about many years ago when working on an application that had to exist in a highly integrated desktop environment.
Let’s face it: the state of the art in desktop integration is still the clipboard metaphor. I cut or copy the information I want to share from one application to a clipboard, and then paste it from the clipboard into the receiving application. In some cases, I may need to do this multiple times, once for each text field. Other “integrated” applications may have more advanced capabilities, typically a menu or button labeled “Send to ABC…” For a few select things, there are standard services “advertised” by the operating system, such as sending email, although these are likely backed by operating system APIs put in place at development time. For example, if I click on a mailto: URL on a web page, that’s picked up by the browser, which executes an API call to the underlying OS capabilities. The web page itself cannot publish a message to a bus on the OS that says, “Send an email to user firstname.lastname@example.org with this text.” This is in contrast to a server-side bus, where exactly that could be done.
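No mainstream desktop OS exposes a bus like this today, so what follows is a toy sketch of the idea, not any real API: a publish/subscribe broker where a mail client registers a handler for a "send-email" capability and any application publishes a message to it, instead of calling a hard-wired mailto: API. The topic name and message shape are invented for illustration.

```python
# A toy publish/subscribe "desktop bus". All names here are hypothetical.
class DesktopBus:
    def __init__(self):
        self._subscribers = {}

    def subscribe(self, topic, handler):
        """An application registers a handler for a capability topic."""
        self._subscribers.setdefault(topic, []).append(handler)

    def publish(self, topic, message):
        """Any application delivers a message to whoever handles the topic."""
        for handler in self._subscribers.get(topic, []):
            handler(message)

bus = DesktopBus()
outbox = []

# The mail client registers as the handler for the send-email capability.
bus.subscribe("send-email", lambda msg: outbox.append(msg))

# Another application (even a web page, in this hypothetical) publishes a
# message rather than calling a development-time OS API directly.
bus.publish("send-email", {"to": "someone@example.org", "body": "Hello"})
```

The publisher never learns which application handles the message; that indirection is what the clipboard metaphor lacks.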
On both the server side and the desktop, we have the big issue of not knowing ahead of time what services are available and how to represent the messages for interacting with them. While a dynamic lookup mechanism can handle the first half of the problem, the problem of constructing suitable messages still looms; that remains a development-time activity. Unfortunately, I would argue that for something like this, the average user will still find an inefficient cut-and-paste approach less daunting than trying to use one of the desktop orchestration tools, such as Apple’s Automator.
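The split between the two halves of the problem can be made concrete with a sketch. The registry, capability names, and schema format below are all invented: lookup can be fully dynamic, but matching a message to the schema the registry returns is still something a developer wires up at build time.

```python
# A toy service registry. Dynamic lookup answers "what services exist and
# where," but the message schema it returns still has to be understood by
# a developer at development time. All names here are hypothetical.
registry = {
    "send-email": {
        "endpoint": "local://mail-client",
        "schema": {"to": "string", "body": "string"},
    },
}

def lookup(capability):
    """First half of the problem: find the service dynamically."""
    return registry[capability]

def validate(message, schema):
    """Second half: the message must fit a schema no tool can infer for you."""
    return set(message) == set(schema)

svc = lookup("send-email")
ok = validate({"to": "a@example.org", "body": "hi"}, svc["schema"])
```

Discovery is mechanical; constructing `{"to": ..., "body": ...}` correctly is the part that still requires a human who read the schema.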
I think the need for better integration at the human interaction layer is even more important given the advances in mobile technology. For example, I’ve just started using the new iPhone interface for FaceBook. At present, there is no way for me to take photos from either the Photos application or the Camera application and have them uploaded to FaceBook. A desktop application isn’t much better, because the fallback is to launch a file browser and require the user to navigate to the photo. Anyone who’s tried to navigate the iPhoto hierarchy in the file system knows this is far from optimal. The right way to approach this would seem to be having the device advertise Photo Query services that the FaceBook app could use. At the same time, it would be painful for FaceBook to have to support a different Photo Query service for every mobile phone on the market.
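What would a standard Photo Query service buy the application developer? A sketch, with every name invented: if devices all advertised one agreed contract, the app could be written once against the interface rather than once per phone.

```python
from abc import ABC, abstractmethod

# A hypothetical standard "Photo Query" contract. If every device
# advertised this one interface, an application could be written once
# against it instead of once per phone.
class PhotoQueryService(ABC):
    @abstractmethod
    def list_albums(self):
        ...

    @abstractmethod
    def get_photos(self, album):
        ...

# One device's implementation of the shared contract (data invented).
class DemoPhone(PhotoQueryService):
    def list_albums(self):
        return ["Camera Roll"]

    def get_photos(self, album):
        if album == "Camera Roll":
            return ["IMG_0001.jpg", "IMG_0002.jpg"]
        return []

def most_recent_photo(device: PhotoQueryService):
    """Application code that depends only on the standard interface."""
    album = device.list_albums()[0]
    return device.get_photos(album)[-1]
```

Each handset vendor implements the contract once; the application never sees anything vendor-specific.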
The point of this post is to call some attention to the problem. What’s good for the world of the server side can also be good for the human interaction layer. Standard means of finding available services, standard interfaces for those services, etc. are what will make things better. Yes, there are significant security issues that would need to be tackled, especially when providing integration with web-based applications, but without a standard approach to integration, it’s hard to come up with a good security solution. We need to start thinking about all these devices as information sources, and ensuring that our approach to integration handles not just the server side efforts, but the last mile to the presentation devices as well.
Richard Monson-Haefel posted a great piece on his blog on widgets and gadgets (also posted on the Burton Group APS blog here). It serves as a good introduction to them. After a thorough definition, he primarily focuses on their use in a consumer setting. As a followup, I’d like to see him post more on their role in the enterprise. It’s something I’ve commented on, as well as Om Malik. As I’ve stated previously, I really think they have a potential role in workflow-based solutions as a vehicle for providing lightweight interfaces that are single-purpose in nature, that is, they provide an interface for doing exactly the task that needs to be done, nothing more, nothing less. They start up quickly and they go away quickly. Hopefully Richard will take the bait.
Phil Windley, Scott Lemon, and Ben Galbraith had a nice discussion on the iPhone, Apple’s iLife and iWork, user experience, consumer-friendliness, and much more in the latest IT Conversations Technometria podcast. Sometimes, their best podcasts are simply when they get together and have a discussion about the latest happenings. It was very entertaining, especially the discussion around the iPhone. Give it a listen. Also, make sure you give the Paul Graham essay on “stuff” mentioned by Phil a read.
The latest Briefings Direct: SOA Insights podcast is now available. In this episode, we discussed semantic web technologies, among other things. One of my comments in the discussion was that these technologies have struggled to reach the mainstream because we haven’t figured out a way to make them relevant to the developers working on projects. I used this same argument in the panel discussion at The Open Group EA Practitioners Conference on July 23rd. In thinking about this, I realized that there is a strong connection between this thinking and SOA. Simply put, it is all about the consumer.
Back when my day-to-day responsibilities were programming, I had a strong interest in human-computer interaction and user interface design. The reason for this was that the users were the end consumer of the products I was producing. It never ceased to amaze me how many developers designed user interfaces as if they were the consumer of the application, and wound up giving the real consumer (the end user) a very lousy user experience.
This notion of a consumer-first view needs to be at the heart of everything we do. If you’re an application designer, it doesn’t bode well if your consumers hate using your application. Increasingly, more choices for getting things done are freely available on the Internet, and there’s no shortage of business workers leveraging these tools, most likely under the radar. If you want your users to use your systems, the best path is to make it a pleasant experience for them.
If you’re an enterprise architect, you need to ask who the consumers of your deliverables are. If you create a reference architecture that is only of interest to your fellow enterprise architects, it’s not going to help the organization. If anything, it’s going to create tension between the architecture staff and the developers. Start with the consumer first, and provide material for what they need. A reference architecture should be used by the people coming up with a solution architecture for projects. If your reference architecture is not consumable by that audience, they’ll simply go off and do their own thing.
If you are developing a service, you need to put your effort into making sure it can be easily consumed if you want to achieve broad consumption. Today it is still most likely that a single project builds both the service consumer and the service provider, and as a result, the service winds up easily consumable only by that first consumer, just as the user interface I mentioned earlier was only easily consumed by the developer who wrote it.
How do we avoid this? Simple: know your consumer. Spend some time on understanding your consumer first, rather than focusing all of your attention on knowing your service. Ultimately, your consumers define what the “right” service is, not you. You can look at any type of product on the market today, and you’ll see that the majority of products that are successful are the ones that are truly consumer friendly. Yes, there are successful products that are able to force their will on consumers due to market share that are not considered consumer friendly, but I’d venture a guess that these do not constitute the majority of successful products.
My advice to my readers is to always ask the question, “Who needs to use this, and how can I make it easy for them?” There are many areas of IT that may not be directly involved with project activities. If you don’t make that work relevant to project activities, it will continue to sit off on an island. If you’re seen as an expert in some space, like semantic technologies, and the model for using those technologies on projects is to have yourself personally involved with each of them, that doesn’t scale, and your efforts will not be successful. Instead, focus on how to make the technology relevant to the problems your consumers need to solve, and do it in a way your consumers want to use it, because it makes their lives easier.
By now, everyone who’s interested has probably watched Apple’s 25 minute video on the features of the iPhone. I did, and it was the clincher to convince me to find a line at an AT&T store somewhere near my house on Friday (I suspect the lines at the Apple Store at West County Mall in St. Louis will be far too long for my taste).
I previously had posted some thoughts on Apple’s announcement that applications for the iPhone can be written using Web 2.0 technologies (except Flash, for now). Patty Seybold contributed to the conversation with some comments on her blog. After watching the video, I wanted to continue the conversation.
Besides games, which really are using the device for an entirely new purpose, the only area where I see a need for third party applications is in the “Internet Communicator” domain. Clearly, no one needs to write anything to help it play music, watch videos, or make telephone calls; Apple’s taken care of that. For me, Internet communication comes down to four things: a web browser, an email client, an RSS reader, and Instant Messaging. Apple’s taken care of two of them, or possibly three, presuming Safari on the iPhone has the same RSS capabilities as the desktop version. They don’t have Instant Messaging, which is a glaring flaw as far as I’m concerned. While your average teenager may be more concerned about SMS, I tend to rely on IM systems. Many enterprises don’t allow outside IM communication from corporate machines, so that means using a phone for it. I can’t find any good reason why Apple wouldn’t have included it. If they felt the primary use was phone-to-phone communication, then why did they not include MMS, and instead make you email pictures? Anyway, I digress.
Let’s look at the RSS space. Clearly, there are web-based RSS readers. Safari itself is one. Google Reader is extremely popular. I prefer a standalone reader and use NetNewsWire, but even it syncs with NewsGator to allow for web-based access if I so desire. So, Apple’s strategy seems to make sense at this point. The biggest challenge, however, is going to be the diminutive screen. There’s a whole crop of web applications beginning to appear (you can keep up with them via the iPhone Application List) which are effectively web sites designed to fit within the iPhone screen. While this can all be managed through bookmarks, I’d much rather have each of them appear as an icon on the home screen of the device. After all, it’s the strategy Apple itself has chosen.
The YouTube, Google Maps, and Weather applications are clearly examples of web-based applications that were tailored for the diminutive screen, simply because the existing web-based interface was probably designed with at least a 640×400 screen in mind, if not more. This is an implicit recognition that zooming and dragging within full size web sites simply won’t cut it from a usability standpoint. It would be interesting to find out just how much of those applications are web-based, and how much of the presentation is actually generated on the phone, versus on some server.
I would fully expect Apple to issue an update to the iPhone to allow it to have a Home Screen Manager, just like the Dashboard Manager in OS X. While the Dashboard provides APIs for saving preferences locally, I don’t even think this is an absolute requirement for the iPhone, which should eliminate any security concerns. The only thing stored is a bookmark. As long as all of these applications rely on centralized cookie management, there’s no reason why all preferences can’t be stored on the server side. The real question is whether developers will produce the iPhone specific interfaces. I think they will, and I think it’s great that Apple isn’t requiring them to use anything proprietary in doing so. As other smartphones add full web browsers to their mix, these sites that were designed for iPhone will probably be usable, albeit not perfect given subtle differences in screen size, without modification.
Given this, is the lack of a distinct API a big deal or not? My opinion is that it isn’t. Why? The biggest reason is that the iPhone is not a general purpose computer. It’s three things according to Steve: a phone, an iPod, and an Internet Communicator. On the phone side, it already provides everything needed for making calls, so no big deal there. On the iPod side, same thing. On the Internet Communicator side is where the debate comes into play. Or does it? Being an Internet Communicator these days is typically concerned with being a Web Communicator, and since the iPhone has a real web browser, not WAP, doesn’t it provide everything we need (except Flash support)? I’ve previously posted that I think we’ll be seeing more and more lightweight widgets with rich UIs available via web technologies. It would seem that these types of solutions are very well suited for a mobile device like the iPhone. I’m not one of those people who needs Microsoft Word or Excel on a small handheld device. There are developers who may be annoyed that they won’t have APIs for direct access to the Multi-touch interface, but I would argue that’s Apple’s problem, not theirs. How many applications are being written that need a new keyboard or mouse? There’s only one category that I can think of, which should be the only area upset with the announcement: games.
For as long as I can remember, there have been continued evolutions in game controllers. The original joystick of the Atari 2600 is not still in use on today’s Playstation or Wii. So, it’s very conceivable that game developers could find some cool way to leverage the multi-touch interface. Secondly, the size and form factor of the iPhone are well-suited for gaming. Apple even knows this, as they opened up the iPod for games some time ago. Let’s remember, however, that Apple does not allow just anyone to produce games for the iPod. I would guess that this is necessary to ensure the security of the iPod is maintained. While an iPod can’t go calling someone, would you be happy if a game wiped out your contacts, songs, or videos? It’s entirely possible that it could also be used to transfer a worm or virus via iTunes synchronization. So, to maintain the “Apple experience,” it has to be a tightly controlled environment. With the connectivity of the iPhone, this is even more important. While the same challenges face other mobile phone providers, none of them rely on the user experience to the extent that Apple does. It’s part of their corporate image, and it’s not something they’re going to bend on. Given that Apple has a partner ecosystem for iPod games, I’m sure the same thing will happen for iPhone games. Given that, I think Apple’s done exactly the right thing. The only thing they need to provide is the iPhone Dashboard. I don’t want a Safari bookmark for every iPhone Web 2.0 app; I want the phone to manage them just as Apple’s Dashboard does with its widgets today. Who knows… perhaps that’s the mysterious 12th icon. I hope to find out in two weeks!
Update: I forgot to discuss the whole Safari on Windows thing. The discussion from Dan Farber at ZDNet made me think of it. My opinion is that it must be a developer play. If the route to the iPhone is through the creation of Dashboard-like widgets, and the underlying CSS engine is Safari, developers have to have a tool for testing it. Starting with the Mac, the first tool for this was Safari. It was only later that Apple came out with Dashcode. I’m guessing that it was probably a far easier path to take Safari to Windows than it would have been to take Dashcode.
Update #2: There’s another discussion on this from Patty Seybold at her Outside Innovation blog.