28 April 2010

Music by a stream with clouds

Consumers have ample choice of online music services right now. The market is fragmented; each service provides a slightly different set of offerings and outlays. Napster is the most established. A decade ago it was innovative, subversive and popular. It then went dark for a couple of years (presumably while it worked through its legal battles) and emerged as a competitive, commercial music rental/purchase service. It had cool features such as an embedded monthly music “magazine”, Windows XP Media Centre integration and, for around five years, a virtual monopoly on unlimited music rental for around $10/month. It should have been a second coming, but an apparent inability to read the market caused it to slide from public consciousness: the magazine stopped after a few months, marketing was outmoded, the DRM implementation was unwieldy, Media Centre integration disappeared for both Vista and Windows 7, barely anything was free, and mobile, web-client and social networking integration were too little, too late.

This opened up the market as smaller players gained footholds. Some concentrated on “radio”, e.g. Pandora and Slacker. Several went for the rental market, e.g. Spotify and Rhapsody. Others stayed with the download model, e.g. iTunes and Amazon MP3. Across all of these (and others) there are varying combinations of payment, access methods, DRM, integration etc.

It is clear that streaming and downloading will both persist as distinct media access methods. Apple hardware owners will always use iTunes (and latterly Lala [US only at the moment]), and the remainder of the download music market is being mopped up nicely by Amazon MP3.

The first service that can make a sustainable business out of free (and DRM-free) audio streaming in a browser, globally (not just the US like Pandora and Slacker, or Europe like Spotify [which does not currently provide web access anyway]), with solid service continuity and payment options that include Media Centre integration, mobile and offline support, will have a relatively clear run for the next five years or so. Aside from essential playlist functionality, adverts (or the lack of them), HD/lossless quality, social networking features, upload capabilities and music discovery/search features are not mainstream deciding factors. Online music is mostly commoditized in terms of need: consumers want a service that works, is accessible anywhere and is (at an entry level) free. Of the available services, Spotify appears to have the best service continuity, being the fastest to start tracks (faster even than Windows Media Player) and never dropping out.

Of those available, the closest to this ideal is Grooveshark; although to secure its future it still needs to improve service continuity, produce a Media Centre client and, ideally, deliver the mobile experience without requiring an app (and a bit faster). It is also worth noting that the perceived legality of Grooveshark is topical, dissuading some from investing time in it. It simply seems too good to be true that, right now, anyone, anywhere, on any PC or console browser (including the PS3), can play pretty much anything (without audio adverts) all day, without even creating an account. Further, if you do elect to create an account (requiring the barest minimum of details), the site will help you send links to people to share tracks or embed a play button in your blog (all without them creating an account). You can even post any streaming track to Facebook. Why isn’t everyone using it? It’s great now. It will be fantastic if they can make their business model work long-term.

Given Google’s generally cloud/consumer-centric approach and the fact that they have recently released a video streaming solution, it appears incongruous that they do not possess an audio streaming solution. Yes, they came in late last year with search augmentation, where samples are streamed and users are then directed to purchases from partners, but Google does not own these organisations, which means they will not be well integrated with the rest of the range (tags, social, semantic web etc.). They also suffer from comparison with the ideal above.

People can go to a friend’s house or a cyber-cafe, use their computer to watch a film through YouTube (US only at the moment), then access all of their pictures and documents through Picasa and Google Docs respectively; but they cannot play their music (using a Google product) while they are doing it? Shouldn’t Google buy Grooveshark and mix it up with everyone else?

25 April 2010

Serapis to Spreadsheets

Offshore development is a trade-off of cost and skills against communication. It will always be this way until, of course, demand exceeds supply and the only reason organisations consider offshore is for premium skills. Communication will likely always be an issue, as it is when broaching any two cultures, time-zones and work ethics. It is, however, unhelpful to treat “communication” as a blanket term.

The issue with offshore communication is communicating project goals (and progress against them, as this is the only real way to confirm understanding). Operational (day-to-day/immediate) communication is generally fine (and much of the reason why non-French speakers can order food in Paris and be understood). Even complex operational communication can be understood offshore. The problem is in appreciating (and buying into) the end-game or goal, and that means keeping it simple.

The Ancient Greeks understood this when they simply bolted Greek gods onto Egyptian religion (creating the Cult of Serapis) as a goal-bridge to help them manage Egypt.

The simplest method of maintaining project tasks is to place them into a single spreadsheet (easy comparison between developers) and to establish a daily process for recording developer feedback, e.g. the proportion of the previous day spent on each task and changes to effort estimates based on greater exposure. We will certainly want a way of “parking” tasks for a while (if one of their pre-requisites is not ready). Next, we will want a system-generated estimate of when each task will be completed (based upon daily progress). This will necessarily differ from both the original estimate and the ongoing developer estimate. Finally, we want web-enablement. That is it; the minimum required to capture and communicate developer progress against common goals and to estimate when code will be complete.
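As an illustration, the “system estimate” could be as simple as projecting forward from the recorded daily fractions. A minimal Python sketch (field names are hypothetical, not taken from the spreadsheet itself):

```python
from datetime import date, timedelta

def projected_completion(remaining_effort_days, daily_fractions, as_of):
    """Project a completion date from actual daily progress, independently
    of both the original estimate and the developer's ongoing estimate.
    daily_fractions: the proportion of each day recorded against this task
    (each developer apportions 1.0 of a day over their tasks daily)."""
    if not daily_fractions:
        return None                      # no feedback recorded yet
    burn_rate = sum(daily_fractions) / len(daily_fractions)
    if burn_rate == 0:
        return None                      # task is "parked" - no progress expected
    return as_of + timedelta(days=round(remaining_effort_days / burn_rate))

# e.g. 4.0 days of estimated effort left, half a day worked on it on average:
print(projected_completion(4.0, [0.5, 0.25, 0.75], date(2010, 4, 25)))  # 2010-05-03
```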

Being code-complete further refines the goal (coding is usually not the bulk of a project). Developers can intersect actual communication (bugs, “please make it do this...” etc.) with code to obtain an accurate representation of what is really required. Further communication – testing, QA/UAT, environment support etc. – is then easier.

A reusable example of the spreadsheet we have just described is here. Cell comments aid interpretation. Just apportion 1.0 of a day over the tasks for each developer and do it every day. Simple spreadsheets like this have genuinely helped manage complex offshore development projects. Project servers, operational BI, timesheet integration etc. add complexity and detract from the “what/who/when” simplicity required for communicating goals.     

19 April 2010

MOSS Internet Site - Security

Building a corporate Internet site using MOSS has real security implications. Access from the Internet (for public users) and from within the organisation (Intranet) must be considered in terms of both authentication and authorisation. Although a sound platform, MOSS is not well documented in this area and an implementation typically needs to be designed. What follows is a suggested solution template for organisations attempting this. It should resonate with the majority of MSFT-shop organisations moving to a MOSS Internet site solution.

The solution supports two basic user categories:

1) Internal users. Employees accessing via the Intranet; typically content authors in charge of maintaining solution content. It is recognized that a requirement for employees to maintain content via the Internet (from home?) may well exist. In this situation, they should connect to their Intranet using whatever connectivity solution is already in situ (VPN?). Although it is certainly possible to allow employees to manage content directly over the Internet, this creates additional complexity, cost and security risk for perhaps the sole benefit of being able to manage content from an “unofficial” desktop (without the connectivity software), e.g. in a cybercafé.
2) External users. Public users accessing via the Internet, able to access content only. They may be further categorized as either Anonymous users (read-only, able to access all public content) or Registered users (read-only, able to access special content in addition to public content).

The solution supports two authentication methods: one for internal users (NTLM) and one for external users via Forms Based Authentication (FBA). Internal users are authenticated using the Single Sign-On (SSO) feature provided by the LDAP directory services. An internal SSO process synchronises the Intranet AD with a De-Militarized Zone (DMZ) LDAP. In practice, this means that an internal user can log onto the Intranet and be pass-through authenticated to the DMZ, i.e. they will not have to log in twice.

Synchronisation between the Intranet AD and the DMZ LDAP is one-way. This means that the creation of new internal users must take place in the AD and not directly in production. Assuming your organisation does not need profile import for LDAP users (and does not provide custom profile properties for its internal users), it will only need to cater for authentication of LDAP users. This carries a minor risk, since it will only be possible to test this once deployed in production.
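To make the one-way constraint concrete, here is a conceptual Python sketch; plain dictionaries stand in for the directory stores (a real implementation would of course use the directory APIs), and the direction of flow is the point:

```python
def sync_ad_to_dmz(ad_users, dmz_ldap):
    """One-way synchronisation: the Intranet AD is authoritative, and
    creations, updates and removals only ever flow towards the DMZ LDAP."""
    for account, attrs in ad_users.items():
        dmz_ldap[account] = dict(attrs)        # create or refresh in the DMZ
    for account in list(dmz_ldap):
        if account not in ad_users:
            del dmz_ldap[account]              # removals flow downstream too

ad = {"jsmith": {"displayName": "J Smith"}}
dmz = {"old_user": {"displayName": "Left last year"}}
sync_ad_to_dmz(ad, dmz)
assert "jsmith" in dmz and "old_user" not in dmz   # new users must start in AD
```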


As with traditional NTLM authentication mechanisms, the FBA application requires a centralized store for external user credentials (in this case a SQL database). Internal user credentials are stored in AD (on the Intranet) and in LDAP (within the DMZ); an internal SSO process synchronizes them. As you would expect, no authentication is required for anonymous external users.

User information stored in the SQL database is created only by a single registration process. A registration page defines the user information and the related validation. Using FBA, users remain authenticated until they close their browser or until the authentication timeout occurs.
 
Moving on to authorisation: authorisation policies are required for granting content access rights to the differing user categories. Two different “extended” web applications are defined for managing internal and external users respectively, and authorisation policies are applied to those distinct “extended” applications, addressing each user category’s tasks.

In the “extended” web application defined for NTLM authentication (used by internal users), two user groups are defined in the site settings permission section: Site Administrators (full control access rights) and Content Authors (design access rights). No anonymous access is supported in this “extended” web application instance. In the “extended” web application defined for FBA authentication (used by external users), all users are defined with read access rights. Anonymous access is supported for a subset of content as appropriate.

A final note on domains. The AD within the DMZ is for service accounts only, e.g. SQL or MOSS services. No trust relationship exists between the DMZ AD and the organisation’s AD proper. The DMZ is also unconnected to any other organisational system, and MOSS-based content deployment into the DMZ is unsupported. These measures should address internal security concerns. There are plenty of other security hoops to jump through when creating a MOSS Internet site, but the above should get you started.

18 April 2010

Playing at Art

With the flame-wars over whether video games are really art (hopefully) over for several years, we can perhaps move on and look at methods to inject more artistic merit into videogames.

For those still unconvinced of its art status, the fact that videogames turn over as much money as film (appealing to your commercial side), an idle contemplation that society considers ceramics to be art (appealing to your political side) and a casual play of Half Life, Psychonauts, Silent Hill, Bioshock, Grim Fandango, Myst, Okami, Impossible Creatures, Electroplankton, Heavy Rain and/or their sequels (appealing to your – well – artistic side) may cause you to reconsider.

Is art in video games desirable though? Well, aside from the altruistic vision of it helping shift millions of young, single men out of darkened bedrooms and into engaging their senses, emotions and the range of human experience (maybe even with girls), there is a commercial imperative. Many popular video games are simply too task-orientated (even Stakhanovite) and realistic to have much chance of holding artistic integrity, e.g. the Call of Duty, Splinter Cell and Total War series. Realism is still impressive in a video game, but barely. In less than ten years, photo-realism will surely make gaming scenes and characters broadly indistinguishable from real ones. There will need to be new qualifiers of what makes a game interesting and different from others (in addition to playability, of course).

Old games were weirder and more abstract due to technical necessity, but the whole market was also less commercial. Individuals could straddle both the coding and creative chasms, and their vision alone (much akin to a film director’s) could carry a game along most of the development life cycle, creating crazy, organic, beautiful, engaging worlds that stay with you. As a result, old games were more – artistic. Indie gaming still largely works like this, but the big names will need to blend production values with indie approaches to compete; that, or at least chillax and get their freak on.

If an indie gamer can produce a free online video game that has a goal, consistent art direction, makes a clear existential point and puts a smile on your face - in less than a week, what can a Duke Nukem Forever–esque development life-cycle and budget do?

Here are some practicable suggestions for making mainstream video games more artistic right now:

1) Reuse existing world concepts. There is very little truly new in art. Almost every piece of art could be argued to be (at least in part) a rehash of a previous piece. This is fine. One of the key reasons people visit art galleries is that they also like history; they want to trace the lineage of ideas over time. They want to learn something new as well as simply see something new. Homage is rife in cinema and is completely relevant. This relevancy, together with fusion, synergy, consistency and challenging people with the vernacular of the time, is more important than a (probably futile) attempt to come up with something that no-one in history has ever considered before. Portal looks to be influenced by 2001; the indie game above looks to be influenced by the French New Wave (which in turn was influenced by Film Noir); there are other examples. Instead of another dystopian sci-fi setting, why not (re)create ancient Alexandria in 3D, including the Pharos Lighthouse and the library, then overlay it with modern graffiti/technology to create a hybrid/alternate world that, although new, will also appeal to historians, art-lovers and gamers alike. There are plenty of street maps for ancient Alexandria on the web, and no-one has ever created an FPS virtual world of it before. The whole thing could be rendered in a Banksy/Frank Auerbach style (linked through the natural monochrome ethic of the artists). There are a host of mathematical, astronomical and philosophical precedents in that location to hook an emergent storyline into. It would be new. It would be art.

2) Be creative around the medium. There is much more scope for creativity in the medium than in building your gaming world. This is because a good deal of the infrastructure surrounding gaming hardware, e.g. webcams, email, broadband, downloadable content, mobile, 3D, touch-screens, GPS, micro-payments, IM, photo-realistic graphics, accelerometers and tweets, has only been in widespread use for a dozen or so years. Some games have attempted to capitalize here, e.g. LittleBigPlanet uses the PS3 Eye camera for level design and Gran Turismo will purportedly use it for head-tracking. There was a point-and-click game from a few years back that sent actual emails from game characters to allow the plot to unfold in real-time. These games are a fraction of the market though. Nintendo have led the way here for the last few years, but there is so much more creative scope; there is barely any integration between the Wii and DS consoles, for example.
3) Decorate the corridor at least. The corridor is a staple of FPS video game levels. There is no real issue in using this device, but surely corridors were born to show off graffiti, framed prints/paintings or imaginative entrances/exits? Graffiti in particular is easy to incorporate. There is a certain motivation associated with it that perhaps many designers will not have; they may also be wary of reproducing existing graffiti for perceived copyright reasons, but – take it. If some street artist later complains that you are infringing their copyright – pay them. The designer will always have the fact that graffiti in public spaces is illegal on their side (unless the Government has set up a “graffiti wall”). Anyway, it is a better way of commercializing and improving graffiti quality (Basquiat-izing it!) than the naive suggestion of sticking QR codes (linking to a site that allows prints to be purchased) onto graffiti.
4) Increase cooperative options. With the notable exception of the impressive Left4Dead series, cooperative games have declined in popularity. Gauntlet was once the most popular game in the arcades, yet games designers have chiefly concerned themselves with death matches and single-player experiences for the last twenty years. Yes, MMORPGs and social gaming, e.g. WOW, Everquest, Mafia Wars and others, have become popular over the same time, but they are very trading-, chatting-, building- and/or fighting-focused. Also, the esteem these games afford Goblins and suchlike is arguably derivative and therefore not creative/art. It is tough to find cooperative PS3, Xbox or PC games. Cooperation, in theory, creates participation, unscripted interaction and a more open/fun approach. Removing (or sharing) the competitive imperative should allow more focus on the environment and on contemplating why we are there; mainstream artistic drivers.
5) Kill the main character. The video game character was effectively invented as a vehicle to get around technical limitations, e.g. “you are a bat trying to stop a ball (due to monochrome/block video output) in Pong”, “you are a Brooklyn-based plumber (due to sprite colour/pixel restrictions) in Super Mario Bros”, “you are a space soldier (due to all the corridors we plan to make you run around) in Doom”. Take this away and you remove the back story (certainly an element of the art component), but gaming back stories are invariably dull, childish or an afterthought anyway. You have the best back story – your own life to date. Video games that play off this, allowing you as much free range as possible, will create more possibilities for expression, and that means more art.
6) Design for glitches. There is a sizeable sub-set of the video gaming community that seeks out glitches (Google “video game glitches” and get 11M+ hits). Glitches are well publicized and repeated when they are found. Ones that create hidden or bonus worlds (perhaps because you have been able to jump beyond a wall the designer did not intend) are particularly popular. The downsides – screen freezes, save game corruption, lengthy waits – should be minimized by creating a wrapper that the game runs in, which (in real-time) elegantly returns the user to the game world (a minimal sketch of such a wrapper follows this list). Glitches (basically software bugs) are always going to happen but, unlike in commercial software development, they create something new, something imperfect, something challenging/exciting, something able to be contemplated, explored and potentially even resolved; something very much like – art. The glitch as art is not limited to video games and is even becoming a form in its own right.
7) Allow objects to be uploaded (or at least changed) and persisted. This all adds to creating a richer world which others can incrementally build upon (see 1 above). Over time, we will then have something recognisably – newish – affecting the game itself (new adventures, cultural touch-points or just new graphics). Farmville allows simple pointillist graphics to be persisted by planting crops, and Second Life supports complex scripting to allow anything to be built as a 3D model, photoshopped and uploaded. Some middle ground between the two is probably appropriate for mainstream video games.
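On point 6, a supervising wrapper might look something like this Python sketch; the game loop and state (modelled as a plain dict) are placeholders, not any real engine’s API. It checkpoints state each frame and, on a crash, restores the last good state rather than freezing or corrupting a save:

```python
import copy

def supervised_loop(state, advance_frame, frames):
    """Run the game loop under supervision; on any glitch-induced crash,
    roll back to the last good checkpoint and carry on."""
    last_good = copy.deepcopy(state)
    for _ in range(frames):
        try:
            advance_frame(state)               # one tick of the real game
            last_good = copy.deepcopy(state)   # checkpoint a known-good frame
        except Exception:
            state.clear()
            state.update(copy.deepcopy(last_good))  # elegant return to the world
    return state
```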

12 April 2010

Universal enterprise UX – Part 7 (Browsing, finding & copying)

Unless we implement a duplicate create button in the Type/Filter web-part, we would have to first select an existing Enquiry (or any other business data) in order to create a new one. Each form will have its own individual buttons, but they should all contain the basic functionality indicated above, ideally with the buttons placed in the same positions on the form.

When we see information on a form, it is likely that the user will want to browse (or drill down) into the detail of that particular business entity, e.g. an Enquiry. Several options are available to us for handling browsing.

1) Consolidated View button. We would require this button in order to see more detail surrounding a particular piece of business data in a form. For example, if a particular note were displayed in a list-box of all notes for a particular Enquiry, clicking “View” would open up the Wizard Pane to give more details regarding that particular note without losing context with the Enquiry proper.

2) Multiple View buttons. As per the above, except that each control capable of being drilled into has its own View button. The user can only examine the detail of one business datum at a time on a form, so it makes sense to have just one (consolidated) View button rather than cluttering up the form with one View button per datum. The disadvantage of both of these approaches is that they disrupt standard hyperlink flow; for example, if the user sees an Order Part in a list of linked Order Parts, he has to select it and then select “View” in order to see it in detail. The advantage is that the linking process is simplified, as the user doesn’t need to differentiate between the link (i.e. the row containing the Order Part) and the control (i.e. the list-box containing all Order Parts).

3) Separate the hyperlink from the control. This would be required in order to allow regular hyperlink operation while differentiating between this and the user merely wanting to change a value. If the user wanted to change the value in a control, he would click the area of the control not covered by the hyperlink. If he instead wanted to drill down into the value in the control, he would click the hyperlink. The main disadvantage is that the user needs to be aware of the difference, and this is not typical UX behaviour. It also means that the user can drill down into several values without losing focus upon a particular control. The main advantage is that all hyperlinks work in the regular manner.

Option 3 (separate the hyperlink from the control) is selected for our model, principally because it means that we do not have to compromise on standard hyperlink operation.

In the Enquiry example described above, the Wizard Pane did not have to be used to provide the detail view, although in the case of viewing the details of notes, which do not have a lot of information around them, this is recommended. An alternative approach would be to (upon a View click) re-purpose the entire Browse/Next web-part to display the detailed information and provide a “Back” button towards the bottom of the form. In this way, much more screen real-estate can be used to display the detail, although this is at the expense of context, i.e. the Enquiry details will be temporarily hidden to accommodate the new details. This behaviour would be recommended for viewing the details of the principal business objects only, e.g. Enquiry, Task, Order, Order Part.

The model uses the existing Find and Browse UX components to find entities in forms, for example finding a particular Contact from the Asset Filter (a Type/Filter UI component) to add to the Task Details form. This allows us to:

1) Reduce development time, as additional find and browse capability is not required.
2) Reduce testing effort, because the principal controls need only be tested once.
3) Reduce training effort, as the user knows to always go to the Asset Browser to find assets (Customers, Classifications, Products and whatever else is deemed an asset in future).
4) Obtain data entry speed benefits, as the user’s flow of operation is consistent.

It also lends itself more readily to macro-automation.

Several options around implementing find functionality are available:

1) Accept loss of Type/Filter status. When a control on the form is highlighted (e.g. Contact details), we just go to the Asset Filter and choose the correct Contact Id to add to the form, clicking “Link” to add it to the highlighted area of the form. Benefit: no new button is needed. Benefit: can use the existing one-way line of web-part communication, i.e. from Type/Filter to Display/Update web-part. Drawback: we lose focus upon whatever was previously highlighted in the Asset Filter (Type/Filter), e.g. particular Products of interest may have been in the Asset Filter; these may have had lengthy filter criteria which, if the user wants the list back, will need to be re-entered.
2) Add a “Find” button to the form. This would temporarily re-purpose the Asset Filter in order to find the Contact to add to the form. Clicking “Link” would, in addition to putting the selected Contact or Contacts in the form, place the Asset Browser back into the state it was in previously. Benefit: we don’t lose focus upon whatever was previously highlighted in the Asset Filter. Benefit: the “Type” and “Context” fields in the Asset Filter could be temporarily propagated to show the correct context, e.g. a Type of “Customer” and a Context of “Contact”. Drawback: we have to build an additional line of web-part communication, i.e. from Display/Update web-part to Type/Filter. This could be challenging if the Display/Update web-part is rendered in an iFrame.
3) Add a “Save” button to the Type/Filter. This would save the state of the Asset Filter (Type/Filter) before operating as for option 1 above. Once a state is saved (only one state may be saved), the button would turn into a “Restore” button, allowing the original state to be returned once we have located and added the correct Customer(s). In this way, it would operate much like the memory function on a regular calculator. Benefit: adheres to the existing channels of web-part communication, i.e. from Type/Filter to Display/Update web-part; no new channels means no additional testing required. Benefit: provides additional functionality as a consequence, e.g. a user could find some interesting Products and save them before looking at some Customers, then toggle between the two without even involving the Display/Update web-part and form filling. Drawback: it is an additional control to place upon a Type/Filter and so may add to general application complexity.

Option 3 (add a “Save” button to the Type/Filter) is selected for our model, as this solution gives the user more power without adding behind-the-scenes complexity and risk.
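A sketch of the calculator-style memory behaviour (class and field names are illustrative):

```python
class TypeFilterMemory:
    """One saved state only; the same button alternates Save/Restore."""
    def __init__(self):
        self._saved = None

    def button_label(self):
        return "Restore" if self._saved is not None else "Save"

    def press(self, current_state):
        """Return the state the Type/Filter should now display."""
        if self._saved is None:
            self._saved = dict(current_state)      # Save: take the snapshot
            return current_state
        restored, self._saved = self._saved, None  # Restore: and clear memory
        return restored

memory = TypeFilterMemory()
memory.press({"Type": "Product", "Filter": "Category", "Value": "Pumps"})
state = memory.press({"Type": "Customer"})   # second press restores Products
assert state["Type"] == "Product"
```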

The Type/Filter should also be used to copy things from one UX component to another; in this way, it is an extension of the method used to find things. For example, if a requirement exists to copy one or more Order Parts from one Order to another, the user would select the Order to copy to (bringing it up in the Display/Update web-part), then switch to the To-Do List, select a type of “Order” with a context of “Order Part”, click “Filter” and then select the checkboxes of the existing Order Parts to copy to the Order under focus in the Display/Update web-part. Note that the user can only copy things that are selectable in a linked Type/Filter through the standard filter options. If business elements outside of this selection are required, the filter options should be changed to encompass the required selection rather than implementing some new copy/paste option. Filter options are easily changed, as they are defined through exposed web-part properties.

Universal enterprise UX – Part 6 (Form Development)

Form development is the bulk of programming activity using our model. The Type/Filter and Browse/Next web-parts should (in theory) be configurable (XML/web-part properties) rather than developed. Forms can be developed using whatever technology the organisation is familiar with, e.g. Web/InfoPath/X.

The Browse/Next web-part is the wrapper that all forms reside within (when they are on screen). Many forms will be able to be rendered within it in their entirety. Some forms will require contextual information to be retained in the form for reference while the user is engaged in an “offshoot” task; for example, a form focused upon generating a new user might want to retain the core user details (e.g. Name, ID) on the Browse/Next while also taking the user through a process to add that user to selected user groups. This sort of behaviour should make use of the Wizard Pane concept. This supplementary web-part permanently exists on the screen wherever a Display/Update web-part is placed and (like personalisation behaviour) expands whenever required.

Its use is not limited to wizard-like behaviour; it may be used to render discrete pieces of information, tables or other complex controls that require the original context (in the Browse/Next web-part) to be displayed while the user is interacting with the new process. Another example would be showing the details of Notes: upon selecting a note in the list box (perhaps attached to an Order), the details of that note, e.g. Name, Description, Raise Date, Raiser and Type, are shown in the Wizard Pane.

Form developers should be fully aware of the Wizard Pane and make appropriate use of it wherever possible; judicious use could dramatically improve the UX.

When the Wizard Pane is expanded on a page, it will “compete” for space in the same way as the other web-parts. Within a form, collapsible sections should be used wherever a logical grouping of form details exists, for example “Personal details”. This allows the user to personalize the form experience by collapsing sections he is not interested in at a particular time. The combination of the Wizard Pane and collapsible sections should mean that tabbed dialogues and, in particular, pop-up dialogues are not necessary within any application built using this UX model.

Universal enterprise UX – Part 5 (Personalisation)

All commercial portal platforms allow for a level of personalisation; for example, Windows SharePoint Services (WSS) allows the user to change their personal view of a WSS site, e.g. resizing web-parts, adding new web-parts, minimising web-parts.

Our model needs to extend this, however. For example, when a web-part is minimised, the remaining web-parts should automatically re-size to consume as much of the screen real-estate as possible. This auto-expansion must only be constrained by the underlying portal platform, e.g. the web-part zones for the page in question when using WSS/MOSS. Unless all web-parts are specifically minimised, the web-parts that are not minimised will always consume the same total screen real-estate between them.

Examples of auto-expansion behaviour follow. These are not exhaustive, and additional combinations may be allowed; for example, a page where the Task List is auto-expanded to consume the whole page (similarly to the Asset Filter whole-page expansion below) would be possible.

Dark-gray boxes above indicate that a web-part has been minimized by the user. Light-gray boxes indicate that a web-part has automatically re-sized itself to consume the available space within its web-part zone. Note that in the case of the page described in the top-right above, the Asset Browser has not horizontally expanded to fill the page, as this would exceed its web-part zone. Note also the lines between the pages; they indicate how a page may be successively personalized to the user’s taste.

Universal enterprise UX – Part 4 (Bringing it to life)

Fleshing out our order processing scenario a little: Orders consist of Order Parts (line items), and these are for various Products. Products are grouped into Categories. Enquiries can be progressed into Orders, and Tasks are tracked by the application for various reasons, e.g. tracking cold calls, customer complaints etc. All key entities are allocated a Classification, e.g. Geography. Customers are front-ended by the order processing system but are likely managed using something else; similarly, Tasks are front-ended here but will likely be managed by some workflow solution. That is it – a standard order processing solution.

Several instances call for a non-linear browsing approach, for example the selection of Categories and Products, where related items (not specifically searched upon by the user) may be seen and the selection viewed in context. In many instances, the user will also want to search upon these objects (Categories and Products) directly, i.e. without browsing.


The arrows show UX contextual flow only. For example, user interaction in the Browse/Next web-part will not automatically affect the Order browser; however, the user can select a business operation such as “Add to order” to insert a Destination (or any other allowed object) into the Order browser (with the corresponding Order [Part] highlighted).

Each Type/Filter web-part will operate as a three-fold UX process. The user chooses the type of data to work with (Step 1), then applies a series of general filters to that selection (Step 2) and then browses the result set (list box) using the vertical scroll-bars and column sort functionality (Step 3).

Initial rendering of the Type/Filter web-part (Step 1) should default to either the filter view (Step 2) or the list view (Step 3). As such, this web-part has just two modes – filter mode (Step 2) and list mode (Step 3). Three steps are shown above in order to articulate the UX process involved, i.e. the Type has to be chosen first.

In this (modal) way, the model can effect a search by the following means – filtering, sorting and then browsing (by switching from the Type/Filter web-part to the connected Browse/Next web-part). For both UX and performance reasons, a more unstructured search, where for example all matches for a particular string are returned across all tables, columns and rows, will not be selected. This type of unstructured search is more appropriate to content searching.

A Task is selected in the Task list and the matching Task is automatically selected in the Browse/Next web-part (Order web-part) below it (with the hierarchy also automatically expanded for browsing). The Next functionality will be grayed-out in this particular case, as there will only be one matching Task amongst the Orders.

We would mainly use the Type/Filter web-part as an entry-point to browsing (what users generally really want to do). For example, if we know which State we want, we will use the Type/Filter web-part to select a Level of “State”, then switch to the list view to find our particular state, e.g. California. Once we have selected this, in the same manner as with the Task description above, we would switch to the Browse/Next web-part to drill down into California to see the Regions within that State and the Cities within the Regions (or in fact any hierarchical description we define below California; political, ethnic etc.).

Find functionality in the Type/Filter may be extended by the provision of two Filter/Value pairs: the first has previously enumerated values (i.e. in a drop-down); the second displays different Filters (fields) from the first and allows a “wildcard” search via a text-box.

If the user wished to search for all Marketing Tasks for the New York office created for customer “Smith”, they would enter the following: Type = “Task”, Context = “Marketing”, Level = “”, Filter = “Office”, Value = “New York”, Filter = “Customer”, Value = “Smith”. There is a purposeful and direct parallel with the underlying table structure, e.g. Type = Table, Context = Table Type, Level = Table Relationship, Filter = Field, Value = Value of Field.

No additional Filter/Value pairs are possible in the model above. Note also that no Boolean operators are offered between the Filter/Value pairs; a fixed AND is implied (as in the example above). Of course, more pairs and Boolean operators would extend functionality, but would your users really use them?
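To illustrate the parallel with the underlying tables, here is a hedged Python sketch of how the two pairs might translate into a parameterised query (Level is omitted for brevity; table and field names come from the enumerated drop-downs, never free text, which keeps the string interpolation safe):

```python
def build_query(type_, context, pairs):
    """pairs: at most two (Filter, Value) tuples; the second may wildcard."""
    where, params = ["TableType = ?"], [context]
    for field, value in pairs:
        if "%" in value:
            where.append(f"{field} LIKE ?")   # the wildcard text-box pair
        else:
            where.append(f"{field} = ?")
        params.append(value)
    return f"SELECT * FROM {type_} WHERE " + " AND ".join(where), params

sql, params = build_query("Task", "Marketing",
                          [("Office", "New York"), ("Customer", "Smith%")])
# SELECT * FROM Task WHERE TableType = ? AND Office = ? AND Customer LIKE ?
```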

Universal enterprise UX – Part 3 (Some rules)

For the purposes of both user acceptance and reduced technical complexity, rules regarding the use of web-parts on a page should now be enforced for our model:

1) No more than six web-parts on a page by default. The user may well be able to add more web-parts from a gallery but this should be at his/her discretion. By default, any more than six on a page risks disorientation.
2) Web-part communication only goes one way. Although it is certainly technically possible to have web-parts communicate both ways, i.e. both to and from each other, this creates unnecessary complexity: users have to think modally, and it increases the number of paths through the system which then have to be tested.
3) One web-part should communicate with a maximum of two other web-parts. Although it is generally desirable to have web-parts communicating with each other, any more than two other web-parts being “automatically” changed when a user selects an item in a third web-part makes the application too linear, i.e. it prevents other activity from happening in parallel because UX components have just been re-purposed for the current task under consideration. Allowing for a level of UX multi-tasking is desirable.
4) No more than one Display/Update web-part per page. Any more than this would entail either a significant amount of scrolling or would confuse the user, as two portions of the screen would then be devoted to displayed information. Which Display/Update web-part should they look at, for example, when they select a contextual data item from a Type/Filter?

Another rule is that the look and feel of web-parts, and the way that they interact, should be consistent. This not only benefits rapid development, in that UX code may be developed once and re-used for varying functions, but also significantly reduces testing and training time.

Universal enterprise UX – Part 2 (Core Web-parts)

The first post on this topic generated several requests for a UX design of the concept, and some suggestions that it simply would not be implementable. The following posts will attempt to deliver at least a high-level design so this question can be debated. We will use specific terms, e.g. web-part, but the design is generic and not tied to any particular portal technology. We will also use the scenario of a standard order processing system to bring it to life, but it could just as well be any application.

Each portal page will comprise a maximum of one large area (web-part) for the display and update of data (the Display/Update web-part). This will present data and allow updates (if the user has the requisite authority etc.). A page could comprise several other web-parts (without a Display/Update web-part), but it will never have more than one Display/Update web-part, as the scope of the page would then be too confusing for the user. Optionally, this web-part will invoke actions, for example “New customer entry” or “Add contact to enquiry”. As such, it may be described as a “big, dumb” area of the page in that it has no filter, selection (of types) or browsing capability. It exists for displaying and updating data – CRUD operations.

The Display/Update web-part must then be controlled (and contextualized) through other web-parts e.g. filtering, selection (of types) and browsing. These functions may be combined in several different ways to create the web-parts described in the earlier post.

Data with a rich hierarchy, such as Products, lend themselves to a browse metaphor; data which is mainly sequential, such as Tasks, lends itself to a list metaphor. Our model will implement these as such. There are also a couple of ancillary functions: Next (for moving through peers in a hierarchy) and Filter (for selecting what is in the list) respectively.

Now that we have defined the core functions, several options for combining them into web-parts are possible:

The “Next” functionality simply finds the first match and allows the user to “hop” through the matches for whatever filters are set in the Type/Filter web-part.
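A sketch of that hopping behaviour, with the hierarchy modelled as nested dictionaries (illustrative only):

```python
def next_matches(tree, filters):
    """Yield matching nodes in browse order; each press of "Next" simply
    advances this iterator to the following match."""
    stack = [tree]
    while stack:
        node = stack.pop()
        if all(node.get(f) == v for f, v in filters.items()):
            yield node
        stack.extend(reversed(node.get("children", [])))

usa = {"name": "USA", "children": [
    {"name": "California", "level": "State"},
    {"name": "New York", "level": "State"}]}
hops = next_matches(usa, {"level": "State"})
print(next(hops)["name"])   # California - the first match
print(next(hops)["name"])   # New York - one "Next" press later
```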

1) Multiple Filter web-parts. Easier for multiple developers to code; for example, they can each be given a web-part describing a “type” of information, e.g. Developer X is responsible for creating a web-part to select “Geography” and Developer Y is responsible for building a web-part to identify a “Category” for successive editing. Drawbacks: issues with code re-use and consistency, and the amount of screen real-estate required.
2) Combined Type and Filter web-parts. Here, the same web-part is used to display all types of data; for example, a drop-down allows the selection of “Classification” as a type, and the user uses the filter functionality to select whether “Geography” or “Category” is displayed in the same web-part as a list. The user then chooses one at a time to display or update in the Display/Update web-part, using the filter capability in the controlling Type/Filter web-part to select the first in the list of matches and the Next button to move through the matches.
3) Separate Type and Filter web-parts. As above, but the type functionality in the drop-down is taken out of the controlling web-part and placed into a dedicated Type web-part. The issue here is that it may be confusing for the user in that they have to know which web-part is actually controlling the Display/Update web-part i.e. Type or Filter.
4) Combined Type/Filter and Browse/Next web-parts. As with option 2, but the Browse web-part functionality is also included. This option has the advantage of being very flexible in terms of user interaction. It suffers, however, from a compromise around consistency: either the Type/Filter/Browse web-part is used for all controlling functionality (even in those areas that do not need it, where it could be potentially confusing, e.g. the Task list) or we use it selectively for areas that may benefit from browsing by a list or a hierarchy, e.g. Classification.

Option 2 is selected for our model, as it combines usability with judicious screen utilization and affords a consistent UX. It also does not result in extraneous functionality being deployed, e.g. around the Task example described above.

Our model now has three core web-parts: Display/Update, Browse/Next and Type/Filter. Together, these will enable us to build most applications. They are effectively UX classes, used to build the physical web-parts of the application; e.g. both a Product browser and an Order browser can be made from (differently configured) Browse/Next web-parts.
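A small sketch of that class-like reuse (the configuration keys are illustrative):

```python
class BrowseNext:
    """One Browse/Next implementation; browsers differ only by configuration."""
    def __init__(self, root_type, child_levels, allow_next=True):
        self.root_type = root_type        # e.g. "Category" or "Order"
        self.child_levels = child_levels  # hierarchy levels exposed to browse
        self.allow_next = allow_next      # peer-hopping enabled or not

product_browser = BrowseNext("Category", ["Product"])
order_browser = BrowseNext("Order", ["Order Part"])
```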

08 April 2010

On Microblogging & Microcelebrity

Why Tweet? Many tweeters are motivated by learning. A clinical psychologist suggests it “...stems from a lack of identity...”. Arguably the most widespread theory (other than simply “it’s cool”) is a narcissist/observer cycle between celebrities and civilians respectively. It is a viable way for many people to feel like they are “hobnobbing” with celebrities; fleetingly and tangentially (as their exchange is typically over a specific topic or shared link), they too are famous. The real reason, as with many truths, is likely some combination of these. Whatever the motive, it’s popular, free, requires no training and is, on the surface at least, an effective communications channel. Is it of any organisational value though?

There are a host of organisational microblogging tools. At enterprise scale, Office Communications Server (OCS) 2007 R2 supports persistent group chat (through its acquisition/integration of Parlano’s MindAlign), as does Lotus Sametime. MSFT additionally has Office Talk in development, a Yammer-esque enterprise microblogging service; this is a research project though, and may never see the light of day or may be subsumed into MOSS/OCS. The space is well served.

Are organisations really using microblogging tools productively in a business sense though? They are not fundamentally different to persistent group chat, functionality that has been in heavy use for years in trading (to collaboratively form trading strategies in real-time) and in automated application feeds (particularly within the Finance and Resources sectors). An immediate, informal, concise, one-to-many communication channel makes perfect sense in these scenarios.

Organisations that already use it will carry on, happy that what they have been doing for years is now modish. Organisations rolling out microblogging now, though – because they feel it is expected by their workforce, believe it will cause an injection of productivity or (more likely) have a single vocal microblogging champion who treats it as a personal cause – will experience challenges. As with many consumer/social tools adopted by the enterprise, the process of obtaining equivalent enterprise versions (security, bulk updates, internal hosting options, directory integration, archiving and moderation) loses much of the usability and fun of the originals. For the employee, having to keep two copies of their Facebook, Twitter, Blogger, Delicious, Foursquare and (in the not very distant future) Quora and Plancast profiles (and settings and friends) will be both tedious and impracticable. Also, many people enjoy the disposable identity present in consumer social networking, especially the young, who can try on different pseudonyms and personas; this is directly at odds with organisational goals. Organisations could federate employee personal networks with work networks, but how many are going to attempt this given the reasons for enterprise versions in the first place? Basically, if you are not a trader, how many business decisions are really going to be influenced by microblogging to justify the cost?

The most important element for organisations to take from the micro-blogging phenomenon right now is not the tool; it is recognition of the potency of microcelebrity:

1) Build it into business processes. This can be harnessed without any tools at all. For example, a key strategic requirement for many organisations is to empower employees to create, publish and maintain ad-hoc reports in response to the needs of the moment (versus some lead time, typically in weeks); it is cheaper, and employees understand their own data best. Driving this behavioural change requires more than training. Showing the “Top 10” most popular reports and who created them, and broadcasting them as web parts/gadgets on home pages, affords a sense of microcelebrity (for the report writer) and positively contributes to other employees’ willingness to create these reports (or hobnob with their creators); a minimal sketch of such a feed follows this list.

2) Build it into employee recruitment/retention. Personal branding awareness is on the rise. Employees are able to articulate their ideas, successes and worth using the Internet as a marketplace. They can operate at an influential global level if they want to, despite their actual role not quite matching up, and those that do can be frustrated with HR policies that do not recognise this. Microcelebrity recognition should be supplementary to standard HR/line-manager recognition, bottom-up and borne from audited, metricized operational usage. It should be inarguable and fundamentally honest come employee review time, as any peaks/troughs are levelled out by sourcing the data from many people rather than from HR, the line manager and the occasional others brought in to provide their views.
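A minimal sketch of the “Top 10” feed from point 1, assuming a simple audited usage log (field names are hypothetical):

```python
from collections import Counter

usage_log = [
    {"report": "Daily Cash Position", "author": "A. Patel"},
    {"report": "Daily Cash Position", "author": "A. Patel"},
    {"report": "Regional Sales", "author": "B. Jones"},
]

def top_reports(log, n=10):
    """Rank reports by usage and credit their creators - the microcelebrity."""
    counts = Counter(row["report"] for row in log)
    authors = {row["report"]: row["author"] for row in log}
    return [(name, authors[name], hits) for name, hits in counts.most_common(n)]

for name, author, hits in top_reports(usage_log):
    print(f"{name} by {author}: {hits} views")  # render as a web part/gadget
```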

No-one is listening though.

07 April 2010

Handling Targets That Do Not Roll-up

Comparing actual figures against targets is fundamental to any Performance Management (PM) system. PM solutions are typically built upon cubes, and cubes in turn are built upon data marts. In a data mart, measurements are generally stored in fact tables at their lowest level, e.g. Cost per Organisational Unit (OU) per Day. These measurements are then aggregated along dimension levels, e.g. Cost per Area per Week is the sum of the daily OU figures for that week and area. This works well for actual figures, as they are almost always additive. Targets, however, can be independent, e.g. Target Cost per OU per Week is not necessarily the sum of the daily target costs for that week.

There may be legitimate business reasons for this, e.g. cost is heavily market/EOS driven and needs to be targeted manually or, for whatever reason, targets cannot be decomposed into fully additive components. This means that the PM solution has to either store the data for all levels in the fact tables or not include targets in the data mart at all. The latter option creates its own issues though, as many front-end BI/PM tools can only connect to a single data source at a time.

Design alternatives for storing data for all levels in the fact tables are not well documented in BI/PM design literature. The options are:

1) Single Fact Table. Storing all aggregates in the same fact table simplifies the data model, but has the disadvantages that values are duplicated (e.g. the year value is stored in every daily record), the fact table contains a large number of fields, and the ETL process is more complex (either using updates or reloading every level on each load).
2) Multiple Measures. To reduce the number of fields in the fact table, the tables could be split along time levels or measures. This option has the advantages of fewer measures per metric and no duplication along the Time dimension. Its disadvantages are that there is likely duplication along the OU dimension and that there are multiple fact tables.
3) Fact Table per Dimension Level. Defining a table for each of the dimension levels has the advantages of no field duplication and the fewest measures overall. Its disadvantage is again that there are multiple fact tables.

Option 3 is generally recommended, as it eliminates data duplication and simplifies the ETL process. It is somewhat unusual in that a star schema typically has a single fact table surrounded by multiple dimension tables, whereas here several fact tables share the dimensions. It is, however, a practical solution and has recently been implemented for a large resources client.

The fact tables are connected to the dimension tables on differing granularities as shown below:


The CostDay fact table is linked to the Time dimension table through its TimeId Foreign Key (FK). Fact tables with different granularities need an additional field linking them to the Time dimension at the desired granularity. All attributes within a fact table therefore have to share the same granularity.
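A hedged pandas sketch of the idea, with the daily actuals fact table rolling up through the Time dimension while the weekly target fact table stands alone (table and field names are illustrative):

```python
import pandas as pd

time_dim = pd.DataFrame({"TimeId": [1, 2, 3], "WeekId": [10, 10, 10]})
cost_day = pd.DataFrame({"TimeId": [1, 2, 3], "OUId": [7, 7, 7],
                         "Cost": [100.0, 120.0, 90.0]})
# Week-grain fact table: targets set independently, NOT a sum of daily ones.
target_week = pd.DataFrame({"WeekId": [10], "OUId": [7], "TargetCost": [350.0]})

actual_week = (cost_day.merge(time_dim, on="TimeId")        # resolve the grain
               .groupby(["WeekId", "OUId"], as_index=False)["Cost"].sum())
print(actual_week.merge(target_week, on=["WeekId", "OUId"]))
# weekly actual (310.0) compared against an independently set target (350.0)
```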


This (admittedly complex) model can later be simplified using whatever tools you use to maintain your cubes. SQL Server, for example, provides Perspectives, Measure Groups and Calculated Members that can be used to hide the complexity of the underlying data model from the user: Perspectives can hide objects (e.g. fact tables) from the user, Measure Groups can create logical groupings of fact members, and Calculated Members can present derived measures.

05 April 2010

BI Strategy Planning Tips - Part 2

4) Directly address credit-crunch sensibilities. BI endeavours can be expensive, and their benefits challenging to articulate. You may need to plan a BI strategy, but right now projects need a critical mass of internal support and short-term benefits in order to obtain funding and/or avoid being postponed. Key areas are:
a. Cash flow management. Cash is king. Cash flow metrics, e.g. Price to Cash Flow and Free Cash Flow, enable managers and potential buyers to see basically how much cash an organisation can generate. Data mining can predict cash flow problems, e.g. bad debt, allowing recovery and/or credit arrangements to be made in a timely (and typically cheaper) manner.
b. Business planning capabilities. While information systems are in place for many organisations to generate representative management information, the level of manual intervention needed to deliver essential reporting can result in unacceptable delay, and therefore data latency and inconsistency. This directly impacts the organisation’s ability to predict and respond to threats to its operations (many, in an adverse climate). Outcomes can be financial penalties, operational inefficiencies and lower-than-desired customer satisfaction. Building habitual Performance Management (PM) cycles of Monitoring (what happened?/what is happening?), Analysing (why?) and Planning (what will happen?/what do I want to happen?) places the focus back on results, as well as affording a host of additional benefits.
5) Release something early. Unlike a couple of years ago, you likely cannot now go through a six-month analysis followed by a data cleansing and integration phase; this work does not deliver tangible business benefits in the short-term. Instead, look at getting basic BI capabilities out within weeks. This will allow you to build incrementally upon your successes, gain business/operations experience (through monitoring usage) and build user alliances gradually. How much you can do here depends on your chosen BI platform, but building your BI/PM on top of existing reports (data pre-sourced/cleaned), selectively using in-memory analytics tools (no need for ETL) and SaaS BI (if your immediate needs are relatively modest) should all be seriously considered.
6) Avoid the metadata conundrum. Metadata is undoubtedly important. It is well known to assure adoption, convincing those making decisions (from the system) that they are using the best data available, BUT it is a complex problem that intersects other disciplines, e.g. data integration, information management and search. Most data objects, whether Word files, Excel files, blog postings, tweets, XML, relational databases, text files, HTML files, registry files, LDAPs, Outlook etc., can be expressed relationally, i.e. they make at least some sense in a tabular format. They also span the spectrum of metadata complexity. The end-game is to build on the ideas of ODBC and JDBC and provide the same logical interface to all of them; a DBMS or file system could then treat them all in the same logical way, as linked databases, extracting the metadata, creating the entities and relationships in the same way and using the same syntax to interrogate, create, read, update and delete them. Tools and theories are evolving, but this is rarely achievable in practice. If you can satisfy regulatory requirements with the bare minimum – do it. If you cannot, just concentrate on data lineage, building as much of a story around the ETL process as possible. Less than 15% of BI users use metadata extensively anyway.
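As a very rough sketch of that “same logical interface” idea, assuming a tabular adapter exists per format (only a few shown; file paths are hypothetical):

```python
import pandas as pd

def as_table(source):
    """Expose a disparate source as a table so downstream metadata/lineage
    code can treat every source identically."""
    if source.endswith(".csv"):
        return pd.read_csv(source)
    if source.endswith(".json"):
        return pd.read_json(source)
    if source.endswith(".xlsx"):
        return pd.read_excel(source)
    raise ValueError(f"no relational adapter registered for {source}")

# df = as_table("customers.csv")
# print(df.dtypes)   # the same metadata extraction, whatever the source
```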