31 March 2010

MOSS project failure reasons

Here is a list of the main reasons for MOSS project failure, based upon experience (many will also apply to non-MOSS projects). Other, occasional, reasons contributing to failure include inadequate business domain understanding, MOSS deployment design and appreciation of MSFT licensing, but for the sake of focus only the main reasons are shown here, in descending priority order:

1) Scope management. Initial requirements gathering is insufficient. Incremental scope additions go undocumented and outside change management. There is little or no architectural analysis of scope shifts before they are implemented, resulting in downstream performance, testing and hosting integration issues. The High-Level Design (HLD) is used as the sole written basis for development, and an HLD is generally insufficient for developers to interpret and develop against. One solution is to factor an element of client MOSS training into the early analysis stages. This reduces the incidence of client scope expectations being significantly more advanced than what is possible Out-Of-the-Box (OOB). Basically, clients tend to think MOSS can do more than it can - OOB.
2) Configuration management. Robust problem and change management processes are missing, inadequately defined or not maintained. This is doubly important in a MOSS environment where configuration, content and code (and also data and process) are entwined. The effect is that developers become confused about what they are supposed to be working on and the client loses detailed visibility. Senior developers may attempt to establish these processes but will generally be ineffective due to inexperience and their limited capacity to enforce them. Multiple versions of both requirements and design documents stored in multiple places, little attempt at sign-off and poor client communication mean that entrenched positions form quickly. Developers can struggle with the same issue for days; if daily updates are recorded in a problem management system, this becomes obvious to the Project Management (PM) function through oversight.
3) Project communication. Overall project communication is poor. With the focus on gaining hard MOSS technical skills, softer skills such as communication are often overlooked. Developers typically send conflicting messages to the client. There are no team scrums and little articulation of cause and effect in terms of the project plan. No “bottom up” captures or extrapolation. A culture of sharing information is not engendered at senior developer level. PM reporting is limited and, when requested, high level and non-specific.
4) Skill levels. Due to initial shortages of MOSS skills, developers have historically been staffed to projects with the effective remit of learning on the job. This is not the issue it used to be, due to wide-scale MOSS training focussing on the .NET web part framework. Key areas remain poorly resourced, however, e.g. workflow and InfoPath development, and engineering experience of running MOSS in “lockdown” mode.
5) Administrative duties. Operational logistics are often neglected, e.g. reporting status, ensuring adequate vacation cover, maintaining a service incident log, ensuring network access for resources and providing a scheduling view of upcoming resource requirements. In a market where MOSS resources are relatively scarce, a focus on effective resource management is critical.
6) Engineering resource. Insufficient engineering resource is applied to MOSS projects. This manifests itself as assumed acceptance of the client’s infrastructure and hosting plans, and means the team cannot plan and iterate for performance from design onwards.
7) Client relationship. The client relationship is diminished. Building it can be left, by default, to developers. Without a consistent face being presented to the client through reporting or solution walkthroughs, the possibility for a relationship to grow is diminished and the client finds it easy to take an aggressive stance towards the engagement when required. This is particularly important where a RAD approach is more applicable, e.g. MOSS.

21 March 2010

BI Strategy Planning Tips – Part 1

Planning a BI strategy for your organisation can be challenging; you need decent industry/vendor awareness, an appreciation of organisational data and, ideally, a handle on budgeting. In no particular order, here are some practical tips to get started with yours. More will be forthcoming.

1) Start with your Supply Chain. Reduction of energy use in data centres is a key IT issue, generally driven by either cost saving or a Green IT focus (shouldn’t they really be the same thing though?). However, most organizations budget between 2% and 5% of revenue for IT, yet spend roughly 50% of revenue on all aspects of supply chain management. Basically, there are significantly more savings to be made in the supply chain than in data centres. Large volumes of raw data are generated and stored at each stage of the supply chain (plan, source, make, deliver and return) by the automated enterprise applications used at most large, global manufacturers. BI can help determine what information is necessary to drive improvements and efficiencies at each stage of the supply chain and turn the raw data into meaningful metrics and KPIs.
2) Forget the BI-Search “evolution”. The training costs alone for commercial BI systems are significant. Organisations want single (easy and simple) interfaces wherever possible, and a search-based interface appears to be the key to engaging the masses. There has been heavy speculation over the last couple of years (and it is ongoing) that BI and Search technologies will somehow merge. This approach, however, only really surfaces existing BI reports for more detailed interrogation, e.g. “July Sales Peaks”. A search string is not a rich enough interface to support ad-hoc queries. Think about it: you need either a dedicated language, e.g. MDX, or a rich data visualisation package to traverse dimensional data (see the sketch after this list). A search box will never explore the correlation between marketing budget and operating income.
3) Forget “BI for the masses” (for now). BI has been actively used in the enterprise since the early nineties, yet the expected “BI for the masses” (basically, the SMB market) hasn’t materialised. Why? Because people want to collaborate and jointly come up with ideas, solutions, figures and approaches; they need this for personal, political and commercial reasons. It will change when enterprise SOA is in place, and it might change when consumer networking changes the way enterprise networking is done. It will not change in the short term.
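To illustrate the point about dimensional queries: asking “show me monthly sales alongside marketing spend for 2009” needs a query language and a cube, not a search string. Below is a minimal ADOMD.NET sketch; the cube, measure and dimension names and the connection string are hypothetical placeholders, not references to any real deployment.

```csharp
// Minimal sketch: a dimensional question expressed in MDX via ADOMD.NET.
// Cube, measure and dimension names are hypothetical; the connection string is a placeholder.
using System;
using Microsoft.AnalysisServices.AdomdClient;

class DimensionalQueryExample
{
    static void Main()
    {
        using (var conn = new AdomdConnection("Data Source=localhost;Catalog=SalesCube"))
        {
            conn.Open();

            // Two measures across the months of 2009 - something a free-text search box cannot express.
            const string mdx = @"
                SELECT { [Measures].[Sales Amount], [Measures].[Marketing Spend] } ON COLUMNS,
                       [Date].[Calendar].[Month].Members ON ROWS
                FROM [Sales]
                WHERE ([Date].[Calendar Year].&[2009])";

            using (var cmd = new AdomdCommand(mdx, conn))
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    // Print the flattened rowset: row member captions followed by cell values.
                    for (int i = 0; i < reader.FieldCount; i++)
                        Console.Write(reader.GetValue(i) + "\t");
                    Console.WriteLine();
                }
            }
        }
    }
}
```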

18 March 2010

You can tag. A lot.

(Originally posted 1 May 2009).

MSFT Tag is a Windows Mobile application that uses the device’s camera to recognize a graphical image, or tag, for the purpose of obtaining information. You could print one out as a sticker and place it on a landmark to get a link to its site or Wikipedia entry, for example. Each tag consists of a 5x10 grid of triangles (these can also be smaller circles, overlaid on a background image to create a custom tag), each of which can be one of four colours. This allows a tag to hold thirteen bytes of data. The information received is of four types: URL, telephone number, vCard and free text.

Telephone numbers, text and URLs (with a bit of compression) could conceivably be resolved on the device, i.e. the tag itself stores the data and the device directly calls the telephone number, jumps to the URL or displays the text. QR codes work this way. That theory is disproved, however, by switching the device to “Airplane Mode”: tag recognition simply stops working. This means that MSFT Tag on the device calls out to a proxy server that resolves the reference (GUID) extracted from the tag into the information already uploaded to the MSFT Tag site. The proxy servers must be pretty impressive as the whole thing works very smoothly.

Thirteen bytes (104 bits) is enough to encode 2^104, or around 2.0E+31, distinct values, each one referencing a separate piece of information. That is many orders of magnitude more than the estimated number of grains of sand on all of the beaches of the Earth, so MSFT must be expecting a decent take-up of this.

Tags are really useful for URL links that carry geography-specific filters, e.g. you are at a bus stop and there is a tag that shows bus schedules, and whether the buses are late or not, for that particular stop. vCards, telephone numbers and free text are cool rather than essential, e.g. you might use a tag embedded in a colleague’s email sign-off to get their details into Outlook, but it is never going to be a killer application.

The eyes of the developer

(Originally posted 22 April 2009).

Offshore development resources are great. They are invariably well educated, diligent and, critically in these trying times, effectively priced. Even now, though, a key reservation organisations have about using them is visibility. They want to see them and talk to them; their requirements are so exacting that only by looking into the eyes of the developer can they be understood. Bringing offshore resources onshore for the initial stages of a project (so that they can later take the knowledge back offshore) is a proven way of mitigating this concern.

The lead times involved in procuring offshore resources onshore (often three months) can be prohibitive for many projects, especially ones founded on a business case of cost reduction/avoidance. This can be expedited to less than a month, but generally only if the resources are undertaking “training” and not developing the solution. Developing the solution, though, is where they will truly learn and become invested in the success of the project. A seemingly attractive alternative for large organisations (and for the consultancies that service them) is to establish an onshore pool of offshore resources to service future onshore projects. Is this a cost-effective solution though?

Five offshore resources brought onshore for three months will cost around $88K (accommodation, fly-backs, insurance, travel, visas etc.). Assuming they can be cross-charged (or sold) at, say, $877/day for 80% of the time they are onshore, this makes around $210K in revenue.

This appears good (140% ROI) except that, once this process is started, the resources have to be retained, i.e. taken out of (or reserved from) the offshore pool until they arrive. This can easily take three months. Assuming a cost of $146/day/resource, this totals $44K, taking the endeavour down to $79K profit (60% ROI). This may be offset by them doing other (short-term) work in the interim, but it certainly should not be counted upon. Once resources are onshore, organisations should also account for increased team lead/managerial support for them (perhaps one day/week across all of them, totalling around $22K in opportunity cost), taking the endeavour down to around $57K profit (38% ROI).
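For anyone who wants to play with the numbers, the back-of-envelope arithmetic above can be reproduced as follows. The only assumption added here is mine: roughly 60 working days in three months; the rates and fixed costs are as quoted above.

```csharp
// Reproduces the post's back-of-envelope ROI arithmetic.
// Assumption (mine): ~60 working days in three months; rates and fixed costs are as quoted above.
using System;

class OffshorePoolRoi
{
    static void Main()
    {
        const int resources = 5;
        const int workingDays = 60;              // ~3 months
        const double utilisation = 0.8;          // 80% billable while onshore
        const double sellRate = 877;             // USD/day cross-charge
        const double onshoreCosts = 88000;       // accommodation, fly-backs, insurance, travel, visas
        const double benchRate = 146;            // USD/day/resource while reserved offshore
        const double managerialOverhead = 22000; // ~1 day/week team-lead opportunity cost

        double revenue = resources * workingDays * utilisation * sellRate;  // ~210K

        double cost1 = onshoreCosts;                                        // onshore costs only
        double cost2 = cost1 + resources * workingDays * benchRate;         // plus ~44K bench cost
        double cost3 = cost2 + managerialOverhead;                          // plus ~22K overhead

        Print("Onshore costs only", revenue, cost1);          // ~140% ROI
        Print("Plus bench cost", revenue, cost2);             // ~60% ROI
        Print("Plus managerial overhead", revenue, cost3);    // ~38% ROI
    }

    static void Print(string label, double revenue, double cost)
    {
        Console.WriteLine("{0}: profit {1:C0}, ROI {2:P0}", label, revenue - cost, (revenue - cost) / cost);
    }
}
```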

This should also be considered a high-risk endeavour, because resources are being recruited for a pool rather than a specific project (the project may not happen) and because of manifest cultural differences. The financial case is therefore already borderline. The best approach has to be simply to keep a close eye on “hot” skills and ensure that offshore pool resources in these areas already have visas.

Floating up to the Spatial Web

(Originally posted 20 April 2009).

Google Street View is an option in Google Earth that shows sequential 2D images down the world’s main streets. It provides a simple 3D effect as you traverse up or down a particular street. Photosynth is MSFT Live Labs software that creates a 3D effect from multiple 2D images of the same scene taken from different angles. You can zoom in and move around the scene if enough 2D images of it are available. It is more sophisticated than Street View since it is not tied to a particular vector (i.e. a street) and because it performs image extrapolation to complete partial views.

Moving through alternate realities like this holds a natural attraction for people; training, POS, medical diagnosis, national security, gaming, film-making, virtual tourism and urban planning would all immediately benefit. Photographs of existing places and objects are increasing at a huge rate through (camera phone) image uploads to social networking sites and the thousands of commercial and Government camera installations throughout the world. There will surely eventually be blanket photograph coverage of pretty much everything everywhere (and once that is achieved - at every time). Geo-tagging will help connect these pictures together.

These technologies basically link up photographs of existing scenes; your brain adds most of the missing spatial orientation. There is no 3D model of structures behind them, unlike, say, Google SketchUp, AutoCAD or the various First Person Shooter (FPS) games, e.g. Call of Duty.

Does there need to be a 3D model behind them? You could ignore surfaces/obstacles, simply walking through them as a disembodied ghost. This might be acceptable for many applications, but for gaming it will not be. 3D models are time-consuming to plot and to maintain as actual landscapes change. The reason models are created now is primarily for edge detection, e.g. you see a wall and, because the 3D model has been defined at design-time to identify it as a wall, you are prevented from moving through it. It is much better to do edge detection at run-time, using some algorithm to identify the wall as an edge (or barrier) and prevent it from being moved through. Gaming will likely be the driver for this new standard. It will be accessible through some cloud-based service and other applications will simply adopt it.
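As a toy illustration of the run-time idea (not any existing API): if the photographic scenery could yield an estimated depth along the viewing direction, collision becomes a run-time test against that estimate rather than a lookup into a hand-built model. The depth provider below is a hypothetical stand-in.

```csharp
// Illustrative only: collision against photographic scenery decided at run-time from an
// estimated depth sample, rather than from a pre-built 3D model. IDepthProvider is a
// hypothetical stand-in for whatever a cloud imagery service might expose.
using System;

interface IDepthProvider
{
    // Estimated distance (metres) to the nearest surface along the current view direction.
    double EstimateDepthAhead(double x, double y, double headingDegrees);
}

static class RuntimeCollision
{
    const double MinimumClearance = 0.5; // treat anything closer than half a metre as a barrier

    public static bool CanMoveForward(IDepthProvider depth, double x, double y, double heading)
    {
        double ahead = depth.EstimateDepthAhead(x, y, heading);
        return ahead > MinimumClearance; // an "edge" is simply a surface too close to pass through
    }
}
```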

Development of the Internet has shown us that competing infrastructure technologies will co-exist for a while but they will eventually “float-up” to the highest common denominator e.g. the most extensible, cost-effective, egalitarian and “fit-for-purpose” technology. Applications built on selected-out infrastructure will eventually switch to the new standard. It is easy to see a gaming driven cloud-based service, regularly updated with near-real time images as being adopted by all other applications requiring spatial understanding.

In this way, we will see a single alternate reality develop where synergies and opportunities are created by markets colliding (as with actual reality) and the “Spatial Web” described last year by MSFT’s Craig Mundie will become normal (if not “real”!).

UPDATE: Swiss computer scientists now use Flickr to do this: http://www.openculture.com/2010/12/3d_rome_was_built_in_a_day.html.

Virtual Earth & SSRS Integration

(Originally posted 27 March 2009).

Virtual Earth integration with SSRS can be a challenge due to uncertainty around the approach and a general lack of resources on the topic. Developers can use three Virtual Earth APIs: the MapPoint Web Services, the Virtual Earth Map Control and the Virtual Earth Web Services. SSRS cannot currently use the Virtual Earth Map Control, since that relies on JavaScript requests to an AJAX map object and SSRS will not allow JavaScript to run (otherwise it could not export to PDF, amongst other things). This would have been preferable since it affords a level of interactivity, e.g. zooming in on tiles. This leaves either the MapPoint Web Services or the Virtual Earth Web Services. Both work the same way in that points to plot are passed to the web service, rendered on the map server-side and passed back to the browser as an image. MSFT no longer develops the MapPoint Web Services (although existing applications built on them will remain functional), so if you want Virtual Earth integration with SSRS, your only real choice is the Virtual Earth Web Services. A snippet showing how to pass coordinates (longitude/latitude) to Virtual Earth so that a pushpin is displayed on the map (using the Imagery Service) follows the setup notes below.

You will need a Virtual Earth Platform Developer account, which itself needs a Windows Live ID. Once you have this, the SSRS report should call a function in a C# custom assembly, passing in an MDX query string used to retrieve co-ordinates; the report itself only calls the custom assembly. The output of the function is a URL for the map image to display on the report. The custom assembly queries the database/cube to retrieve co-ordinates for the points to plot, then calls the Virtual Earth Web Services, passing through the co-ordinates to be plotted. Finally, the GetBestMapView method should be used within the assembly to scale the image appropriately, e.g. if all points fall in the South East of England, the image needs to zoom in to the South East of England and not the whole of the UK.
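Here is a minimal sketch of that flow. It is not a drop-in implementation: the proxy type names (ImageryServiceClient, MapUriRequest, Pushpin, Location, MapUriOptions, SizeOfint) follow a typical service reference generated against the Imagery Service and should be checked against your own, the application ID and connection string are placeholders, and the MDX is assumed to return latitude and longitude as its last two columns.

```csharp
// Sketch of the custom-assembly approach described above. Proxy type names are assumptions
// based on a typical generated service reference; check them against your own project.
using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.AnalysisServices.AdomdClient;   // MDX access to the cube
using VirtualEarthImagery;                      // assumed namespace of the generated Imagery Service proxy

public static class MapReportHelper
{
    // Called from the SSRS report expression, passing the MDX query string through.
    public static string GetMapUrl(string mdxQuery)
    {
        var points = GetCoordinates(mdxQuery);

        var request = new MapUriRequest
        {
            Credentials = new Credentials { ApplicationId = "YOUR-VE-APPLICATION-ID" }, // placeholder
            Options = new MapUriOptions
            {
                ImageSize = new SizeOfint { Width = 600, Height = 400 }
                // Zoom/centre should then be fitted to the plotted points
                // (the GetBestMapView step described in the post).
            },
            // Collection type (array vs list) depends on how the proxy was generated.
            Pushpins = points.Select(p => new Pushpin
            {
                Location = new Location { Latitude = p.Item1, Longitude = p.Item2 }
            }).ToArray()
        };

        using (var imagery = new ImageryServiceClient("BasicHttpBinding_IImageryService"))
        {
            MapUriResponse response = imagery.GetMapUri(request);
            return response.Uri;   // the report's image control points at this URL
        }
    }

    // Runs the MDX passed in by the report and returns (latitude, longitude) pairs.
    private static List<Tuple<double, double>> GetCoordinates(string mdxQuery)
    {
        var coords = new List<Tuple<double, double>>();
        using (var conn = new AdomdConnection("Data Source=localhost;Catalog=SalesCube")) // placeholder
        {
            conn.Open();
            using (var cmd = new AdomdCommand(mdxQuery, conn))
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    int n = reader.FieldCount; // assumes latitude/longitude are the last two columns
                    coords.Add(Tuple.Create(Convert.ToDouble(reader.GetValue(n - 2)),
                                            Convert.ToDouble(reader.GetValue(n - 1))));
                }
            }
        }
        return coords;
    }
}
```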

Text breeds data. Data breeds information.

(Originally posted 7 March 2009).

Received wisdom tells us that unstructured information is 80% of the data in an organisation. Reporting, BI and PM systems are still tied, in the main, to structured information in transactional systems (obtained through ETL, staging, dimensional modelling, what-if modelling and data mining). An opportunity has existed for some time to incorporate unstructured data; the inhibitor is technology. While product sales figures can be extracted over years and then extrapolated to determine likely sales for the next period, how does a BI solution use the dozens of sales reports, emails, blogs, unstructured data embedded within database fields, call centre logs, reviews and correspondence describing the product as outmoded, expensive or unsafe? If it does not, the organisation may lose out, since this information (the 80%) can affect the decision of how much of the product to produce.

Most organisations currently handle the general need to unlock unstructured data by market sampling through interviews, questionnaires and group discussion. They attempt to apply structure to the data by categorizing it, e.g. “On a scale of one to ten, how satisfied are you with this product?” They either ignore information that is already there or manually transpose it, typically by outsourcing the work. A minority use the only technology that can truly unlock unstructured data within the enterprise right now: text mining. Note that this is different to both Sentiment Analysis (too interpretive right now) and the Semantic Web (too much data integration required right now).

Many of these organisations, however, believe text mining simply makes information easier to find. This is a function of currently available products. The principal MSFT text mining capability is in its high-end search platform FAST. Such products use text mining techniques to cluster related unstructured content. It is not enough to loosely link data, however; it needs to be linked at an entity (ERM) level so that it is subject to identical policies of governance, accountability and, crucially, the same decision-making criteria. It is worth stating the obvious: text mining is like data mining (except it’s for text!), establishing relationships between content and linking this information together to produce new content; in this case new data (whereas data mining produces new information).

Consider a scenario where an order management application retrieves all orders for a customer. At the same time, search technologies return all policy documents relating to that customer segment, together with scanned correspondence stating that dozens of the orders were returned due to defects. The user now has to perform a series of manual steps: read and understand the policy documents, determine which returned-order fields identify policy adherence, check the returned orders against these fields, read and understand the correspondence, make a list of all orders that were returned (perhaps by logging them in a spreadsheet), calculate the client value by subtracting their value from the value displayed in the application and, finally, modify their behaviour towards the customer based upon their value to the organisation. Human error at any step can adversely affect customer value and experience. Much better to build and propagate an ERM on-the-fly (by establishing “Policy” as a new entity with a relationship to “Customer Segment”), grouping the orders as they are displayed while at the same time removing returned orders. I’m not aware of any organisations currently working in this way. This is the technology inhibitor and where text mining needs to go next.
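Purely as a hypothetical sketch of what that on-the-fly linking might look like once text mining has promoted “Policy” to an entity related to “Customer Segment”: the types below are illustrative stand-ins, and the text-mining pass that produces the Policy records is not shown.

```csharp
// Hypothetical sketch only: the relationship Policy -> Customer Segment exists as data,
// so returned orders are removed and customer value is calculated without manual steps.
using System.Collections.Generic;
using System.Linq;

class Order  { public decimal Value; public bool Returned; public string CustomerSegment; }
class Policy { public string AppliesToSegment; public string Text; }

static class CustomerValue
{
    // Orders come from the order management application; policies are the (assumed) output
    // of a text-mining pass over the unstructured documents.
    public static decimal Calculate(IEnumerable<Order> orders, IEnumerable<Policy> policies, string segment)
    {
        var segmentPolicies = policies.Where(p => p.AppliesToSegment == segment).ToList();

        // Returned orders are removed automatically instead of being logged by hand.
        var keptOrders = orders.Where(o => o.CustomerSegment == segment && !o.Returned);

        // Policy adherence checks would hang off segmentPolicies here; the point is that the
        // policy/segment relationship is available as data, not as a manual reading step.
        return keptOrders.Sum(o => o.Value);
    }
}
```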

MOSS development tips

(Originally posted 19 February 2009).

Other than the classic (and largely non-technical) project concerns of scope management, configuration management etc., there are key challenges specific to a MOSS deployment. The process for moving the MOSS solution between environments, for example, is more complex than for other .NET projects. This is due to the way MOSS separates configuration, content and code. The process is also heavily dependent upon the environments being used. Implementing changes to Production post go-live is particularly challenging as, of course, you need to retain content. The process is routinely under-specified. A document explains the steps involved here.

A general lack of adherence to a MOSS-centric methodology, other than general MSFT best practices, causes developer confusion. The key MSFT document here is “SharePoint Products and Technologies Customization Policy 2007”. It is intended as a framework for organisations providing MOSS platforms. Organisations should also use its list as a starting point to verify the quality of solutions submitted for deployment. Along with providing a check after a solution has been developed, the code acceptance checklist can make a good training tool. The steps required to plan, design and deploy MOSS projects are defined here. A governance plan is recommended as a means of gaining early support from key stakeholders across organizations. It forces consensus and designates ownership for numerous key deployment considerations, both for governance and for information architecture. Tracing and error logging support are required reading for MOSS developers.

Reporting is generally required from MOSS applications. Reports are typically written in C#.NET against the MOSS object model. IT departments are generally aware of SSRS and like its OOB functional abilities, e.g. export to PDF/Excel, publishing to MOSS, ties to workflow and the non-specialist skills required. They find it difficult to understand why MOSS reporting is not so straightforward.
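For context, a minimal sketch of the object-model route is shown below; the site URL, list name and field names are placeholders, and in practice the query and field set would be shaped by the report’s needs.

```csharp
// Minimal sketch of extracting report data via the MOSS object model (the route described above).
// The site URL, list name and field names are placeholders.
using System;
using System.Data;
using Microsoft.SharePoint;

class MossReportExtract
{
    static DataTable GetOrders()
    {
        var table = new DataTable("Orders");
        table.Columns.Add("Title", typeof(string));
        table.Columns.Add("OrderValue", typeof(double));

        using (SPSite site = new SPSite("http://moss/sites/sales"))   // placeholder URL
        using (SPWeb web = site.OpenWeb())
        {
            SPList list = web.Lists["Orders"];                        // placeholder list name
            foreach (SPListItem item in list.Items)
            {
                table.Rows.Add(item.Title, Convert.ToDouble(item["OrderValue"]));
            }
        }
        return table;
    }
}
```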

MSFT wording around using SSRS against MOSS has been relaxed, and it may now be prudent to periodically copy the relevant MOSS tables or database to a reporting database and query directly against that. The decision to do this requires explicit knowledge transfer (KT) and consultation with the IT department, e.g. MSFT is more likely to change the DB schema than the object model, so any reports written against the DB schema could be rendered unusable by future service packs or releases. It will generally be less problematic, however, than the alternative.

Dumbing down of learning

(Originally posted 18 February 2009).

Learning within the enterprise has always been a fragmented affair. There are dozens of specialised methodologies, packages and even languages devoted to it. It has its own standards, e.g. SCORM, and it seems to take several months to deploy a new training programme, by which time the processes, tools or methods it describes are likely outdated. Actually attending training courses is also tedious: combinations of self and group study exercises and instructor-directed sessions, punctuated by role-play sessions or videos to add variety. Occasionally there are also group discussions, which are intended to share the group’s experiences but are in practice a mechanism to update course content. E-learning, or Computer Based Training (CBT), or whatever you term it, is over-intellectualised.

This is to say nothing of the huge cost (and risk) of fielding several resources (often from the same department) for two-to-five-day training courses every couple of quarters, often with additional hotel costs and associated travel/carbon footprint. This expenditure is routinely tolerated because organisations recognise that investment in people is associated with lower employee turnover, which is associated with higher customer satisfaction, which in turn is a driver of profitability. On the employee side, attendance is justified because everyone gives broadly positive feedback at the end of the course; they don't want to rock the boat and they enjoy their periodic change of pace, after all. Both parties are complicit in the charade.

Web 2.0 tenets of collaboration, user-generated content, crowd-sourcing and social networking have made it into the enterprise on, by comparison, shaky justification: efficiency savings per head (useless when the person does nothing business-like with the time saved), back-door routes where enterprising individuals have brought the tools in unmanaged (helped because many of them are open-source and free), or loose assumptions that Generation Y are fundamentally different to Generation X and expect these tools in their workplace or they will leave. Yet these technologies are all about real-time sharing of information, commenting on each other’s content and utilising the inherent wisdom of crowds; in other words, the precise foundation for effective learning.

Rather than just knowing how to do something because they have been trained in it previously (perhaps some years ago), people will instead look to current solutions: contacting someone else who has done it before or, failing that, picking through search results. They'll have a low chance of finding the support they need and will either learn through their mistakes or give up and escalate to someone else. Both alternatives carry a cost to the enterprise.

An alternative is to redeploy the effort spent on training into building learning opportunities (based on Web 2.0) into the operational solutions that people use each day. In this way, they don't have to search for knowledge (it is already there and contextualised for them). It will also be more likely to be what they need, because it will have been created by others who do the same thing each day. Finally, because it is all in one place (the line-of-business systems), it will (paradoxically for Web 2.0) be easier to control centrally (compliance, consistency etc.). Learning is not about remembering things from years past. It is about being able to find, validate, synthesize, leverage, communicate, collaborate and problem-solve with facts, ideas and concepts. In real time.

This creates new enterprise challenges, such as providing usable tools for the production of user-generated content and providing an effective incentive mechanism for people to produce the content. These are being addressed though. MSFT has recently announced Semblio, a product/SDK using .NET/WPF to develop collaborative learning material. The SDK is available now and the product part (for content creation by non-technical users) will come with Office 14. Other mash-up tools, e.g. Popfly, can also be appropriate for creating learning content, and let's not underestimate the power of video podcasting. 5MP+ cameras (required for workable video quality) are almost mainstream. BT has taken a lead on pervasive, on-demand, podcast-based learning through its Dare2Share programme.

In terms of incentivising, management consultants are learning to cultivate the concept of microcelebrity within the organisation. In many cases, this can be met by technology: implementing (and publishing on the Intranet) a ranking mechanism for training content. In other cases, it will involve some BPR. Whichever route is taken, we'll all be happier, even if we lose our little changes of pace.

UPDATE: Mahalo has recently pivoted toward consumer video-based learning, although the goal of creating "thousands of original, high-quality videos each week" seems unrealistic.

PPS reshuffle

(Originally posted 28 January 2009).

PPS has been reshuffled (http://blogs.msdn.com/sharepoint/archive/2009/01/23/microsoft-business-intelligence-strategy-update-and-sharepoint.aspx). PPS has been a successful MSFT technology: it took MSFT from a standing start to being a player in the competitive PM market, to the point where Gartner’s last Magic Quadrant for PM showed MSFT in the Visionary category. As with Content Management Server previously, MSFT will consolidate PPS Monitoring and Analytics into its flagship platform MOSS from mid-2009. PPS Planning will be withdrawn. The core MSFT BI stack remains unchanged.

Most customers will actually benefit from this development. Customers that want PPS Monitoring and Analytics (the bulk of those that have any interest in PPS) should see a reduction in licensing costs: a MOSS Enterprise CAL is cheaper than a PPS CAL. Many customers will already own it.

Why did this happen though?

The target market for PPS Planning (in its current form) is saturated. Most organizations that need end-to-end PM already have solutions. MSFT had only a modest share of the PM application market; SAP, IBM and Oracle between them took around half. The market was also fragmented at the SME end, with much of the remaining half made up of smaller vendors, often built upon the MSFT BI stack, e.g. Calumo.

Customers in newer PM sectors, e.g. Insurance/Legal, and the SME market were gradually beginning to adopt PPS Planning, but working with PPS models was arguably more of a technical activity than they were used to. PPS Planning lacked web-based data entry. The Excel add-in could be slow when connecting to PPS or publishing drafts. PPS was well served with complex financial consolidation options, but this is a niche activity (MSFT makes its revenue from CAL licences, after all). MSFT needed to democratize the PM market, but this required an associated change in business practice, with planning activities becoming both decentralized and collaborative. PPS Planning was unlike the other products in the MSFT BI suite and ultimately did not drive forward the core MSFT proposition of “BI for the masses” (http://www.microsoft.com/presspass/features/2009/jan09/01-27KurtDelbeneQA.mspx).

Finally, there may be an alliance angle. MSFT jointly launched Duet with SAP, allowing easy interaction between SAP and MSFT Office environments (especially Excel and Outlook). There could be a strategic direction here to add financial planning and consolidation to a future release of Duet (or some future combination of Duet, Gemini and/or Dynamics). This would tightly integrate MSFT and SAP at a PM level. What would be in it for SAP is not immediately clear, given their acquisition of OutlookSoft (and its incorporation into their own product suite as SAP Business Planning & Consolidation [BPC]), but there has been a significant rise in the last year in existing SAP customers wanting to bolt PPS/SSRS onto their SAP deployments, basically because it is difficult for in-house resources to extract the data from SAP and present it themselves, or because quotes from SIs are routinely in the tens of thousands of dollars per report. Partnering with MSFT here and using PPS Monitoring and Analytics from within MOSS may improve SAP client satisfaction around data access. MSFT would similarly benefit from increased enterprise access.

Universal enterprise UX - Part 1 (Concept)

(Originally posted 6 November 2008).

From a UX perspective, portals are a great way of centralising, personalising and publishing the various functions that a user needs to undertake operationally. They are ideal for combining both structured and unstructured data and contextualising between the two. We’re all essentially information workers, and users perform similar information-centric operations during their day, e.g. browsing, analysing, contextualising, starting new events (based upon old events) and data entry. The same UX should be available for them to perform these actions: not just a similar portal but the same (configurable) web parts.

Where are these gadgets? SAP has a clear concept of UX reuse that plays partly in this space, but where are they for other platforms? There are a finite dozen or so UX interactions (Find, Alert, Link, Flag, Copy, Browse, Cascade, CRUD, Audit, Confirm, Search and Analyse) that can be mapped to six or fewer web parts. Two of these web parts would account for most UX interactions, i.e. Web part 1 (Find, Alert, Link, Flag and Copy) and Web part 2 (Browse and Cascade). Web part 1 would mainly deal with lists and Web part 2 would mainly deal with hierarchies.
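Expressing the mapping as data makes the claim concrete. The first two groupings below simply restate the list above; the remaining groupings and all web part names are illustrative assumptions.

```csharp
// The interaction-to-web-part mapping described above, expressed as data.
// Web part names (and the last two groupings) are illustrative assumptions.
using System.Collections.Generic;

enum UxInteraction { Find, Alert, Link, Flag, Copy, Browse, Cascade, Crud, Audit, Confirm, Search, Analyse }

static class UxCatalogue
{
    public static readonly Dictionary<string, UxInteraction[]> WebParts =
        new Dictionary<string, UxInteraction[]>
        {
            // Web part 1: mainly list-oriented interactions.
            { "ListWebPart",      new[] { UxInteraction.Find, UxInteraction.Alert, UxInteraction.Link,
                                          UxInteraction.Flag, UxInteraction.Copy } },
            // Web part 2: mainly hierarchy-oriented interactions.
            { "HierarchyWebPart", new[] { UxInteraction.Browse, UxInteraction.Cascade } },
            // The remaining interactions spread over a handful of further parts.
            { "RecordWebPart",    new[] { UxInteraction.Crud, UxInteraction.Audit, UxInteraction.Confirm } },
            { "AnalysisWebPart",  new[] { UxInteraction.Search, UxInteraction.Analyse } }
        };
}
```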

These could be mixed and matched with social networking and content management web parts. That’s just six or so interrelated web parts that could handle the majority of bespoke operational applications today. For example, an order-processing user would be able to search for a particular customer, see all their past orders (and correspondence), analyse their propensity to cross/up-sell, take or query their order, and flag particular customers for a follow-up call. When the IT department rolls out new functions, e.g. recording user feedback at POS, users will know how they work because the web parts behave the same as existing functions. For the back office, ongoing development, training and testing of incremental functionality would be reduced. MOSS is already treated seriously as an application development platform (http://www.andrewconnell.com/blog/archive/2007/09/24/6116.aspx) as it handles the plumbing every application needs. Some organisations have deployed a similar concept for customers (reusing the same web parts each time), but this approach only really pays dividends with incremental development or new solutions that use the same UX web parts. Why is the universal enterprise UX not more prevalent?