Software Selection: The Limitations of an Exclusively Spreadsheet-Driven Approach
Today, companies are reevaluating their online and omnichannel capabilities to align with new digital consumer trends and reduce their total cost of ownership in these uncertain times. From 2010 to 2019, total US online retail sales grew at a 15.1% CAGR versus 3.9% for total retail sales, and US eCommerce retail sales penetration surged from 6.4% in 2010 to 16.0% in 2019. (Source: Digital Commerce 360) As a result, IT budgets continue to grow: in a survey of 1,400 global CIOs, 55% reported that their IT budgets had increased. (Source: Deloitte Tech Trends 2020) The accelerated omnichannel response to Covid-19 further compounds this growth; digitally enabled order routing and flexible delivery options such as curbside pickup are just two examples.
This renewed focus on future-forward digital capabilities has many companies reevaluating their current enterprise application stack. Integral to this effort are both the technical evaluation and the software selection process. This article highlights one of the biggest myths about software selection: the belief that a single weighted, numerically scored spreadsheet is entirely adequate for choosing a platform.
To be clear, having a systematic and well-organized approach to software selection is essential. For some, this might imply distilling all the considerations and decisions down into a numeric equation. This is a tempting approach and may satisfy those seeking a purely ‘objective’ method that can be easily explained to executive leadership. However, believing you can thoroughly select software based solely on a spreadsheet proves, time and time again, to be unrealistic: it still introduces many of the subjective elements the spreadsheet was meant to avoid. For example, suppose one of the metrics is ‘End User Flexibility.’ Consider the questions it immediately raises (a brief sketch of the weighting problem follows the list):
- How would this flexibility be weighted, and by whom? The actual end-users might weigh it as extremely important, while IT might weight it lower because of the added complexity and system performance costs associated with it.
- What specific features need this flexibility, and is that specified in the weighted spreadsheet?
- Is there organizational consensus among end-users that this flexibility is needed?
- Is there a shared definition of what ‘flexibility’ even means? For some, it could mean being able to completely customize the front end; for others, it could simply mean having a few more drop-down options than today.
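To make the point concrete, here is a minimal sketch in which the vendors, criteria, ratings, and weights are all hypothetical. The same raw ratings produce opposite rankings depending on whose weights are applied:

```python
# Hypothetical 1-5 ratings for two vendors against three criteria.
criteria = ["end_user_flexibility", "performance", "implementation_effort"]
scores = {
    "Vendor A": {"end_user_flexibility": 5, "performance": 3, "implementation_effort": 2},
    "Vendor B": {"end_user_flexibility": 2, "performance": 5, "implementation_effort": 4},
}

# The same ratings, weighted by two different stakeholder groups.
weights_end_users = {"end_user_flexibility": 0.6, "performance": 0.2, "implementation_effort": 0.2}
weights_it = {"end_user_flexibility": 0.2, "performance": 0.4, "implementation_effort": 0.4}

def weighted_total(vendor_scores, weights):
    """Classic weighted-scorecard arithmetic: sum of rating x weight."""
    return sum(vendor_scores[c] * weights[c] for c in criteria)

for label, weights in [("End users", weights_end_users), ("IT", weights_it)]:
    ranking = sorted(scores, key=lambda v: weighted_total(scores[v], weights), reverse=True)
    print(label, "ranking:", ranking)

# End users ranking: ['Vendor A', 'Vendor B']
# IT ranking: ['Vendor B', 'Vendor A']
```

The arithmetic is perfectly objective; the ranking, however, is driven entirely by the weights, which is exactly where the subjectivity re-enters.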
Evaluating platform cost is often the poster child for a metric-driven spreadsheet approach. Yet even building an accurate apples-to-apples comparison of 3-Year Total Cost of Ownership (TCO) across multiple vendors can be challenging, as the questions below (and the brief illustration that follows them) suggest.
- If one platform is Platform-as-a-Service (PaaS) while another is Software-as-a-Service (SaaS), are the costs broken out in enough detail to fully understand the differences?
- If one platform is more reliant on 3rd party integrations than others, is the cost of those 3rd party integrations included in the calculation?
- Are additional 3rd party integration costs treated as inherently negative, or could they be a positive?
- If your company already has some of those 3rd party integrations in place, how are you accounting for the cost of keeping them?
- How are first-year implementation costs being evaluated if systems integrators (SIs) are not participating in the evaluation process?
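As a simple illustration, with entirely hypothetical figures, consider how the answer to "which platform is cheaper over three years?" flips depending on which line items the spreadsheet actually includes:

```python
# Hypothetical 3-year cost line items for two delivery models.
three_year_costs = {
    "SaaS Vendor": {"subscription": 800_000, "hosting": 0,
                    "3rd_party_integrations": 400_000, "implementation": 450_000},
    "PaaS Vendor": {"subscription": 600_000, "hosting": 250_000,
                    "3rd_party_integrations": 150_000, "implementation": 500_000},
}

def tco(costs, include):
    """Total only the line items the scorecard chooses to count."""
    return sum(costs[item] for item in include)

platform_only = ("subscription", "hosting")
all_items = ("subscription", "hosting", "3rd_party_integrations", "implementation")

for vendor, costs in three_year_costs.items():
    print(vendor,
          "| platform costs only:", tco(costs, platform_only),
          "| full 3-year TCO:", tco(costs, all_items))

# SaaS Vendor | platform costs only: 800000 | full 3-year TCO: 1650000
# PaaS Vendor | platform costs only: 850000 | full 3-year TCO: 1500000
```

On platform costs alone the SaaS option looks cheaper; once integrations and first-year implementation are included, the PaaS option does. Neither single number in a spreadsheet cell tells the whole story.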
Evaluating functionality and capability across platforms is likewise a deceptively complex endeavor. An ‘off the shelf’ listing of every platform’s functionality in spreadsheet form is difficult enough to build or find, especially for free. Keeping each platform’s functionality current adds another layer of difficulty as new features are constantly being released (a short illustration follows the questions below).
- Even with a spreadsheet of features and functions in hand, who will perform the evaluation, and how will they determine whether a feature is fully supported?
- If all platforms support a feature but functionally execute it in different ways, are they all given the same score?
- Today’s leading solutions all handle roughly 90%-95% of typical use cases and business requirements. How are key requirements systematically distinguished from secondary concerns?
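Here is a minimal, hypothetical sketch of the scoring problem: if the scorecard only records whether a feature is ‘supported,’ two platforms that deliver the same feature in very different ways end up with identical totals. Grading how each feature is delivered restores the distinction, but the grades themselves are yet another subjective judgment.

```python
# Hypothetical feature matrix: how each platform delivers three features.
support = {
    "Platform X": {"promotions_engine": "native", "curbside_pickup": "native",
                   "store_inventory_lookup": "3rd_party"},
    "Platform Y": {"promotions_engine": "native", "curbside_pickup": "configuration",
                   "store_inventory_lookup": "custom_build"},
}

# A naive scorecard counts every supported feature the same...
naive = {p: sum(1 for level in feats.values() if level) for p, feats in support.items()}

# ...while a graded view captures how the feature is actually delivered.
grade = {"native": 1.0, "configuration": 0.8, "3rd_party": 0.5, "custom_build": 0.3}
graded = {p: sum(grade[level] for level in feats.values()) for p, feats in support.items()}

print(naive)   # {'Platform X': 3, 'Platform Y': 3} -- identical scores
print(graded)  # {'Platform X': 2.5, 'Platform Y': 2.1} -- the differences reappear
```

Whether ‘configuration’ deserves a 0.8 or a 0.6 is exactly the kind of judgment a spreadsheet cannot make on its own.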
In conclusion, weighted scorecards certainly have their place in the software selection process. Still, they should be considered only one of many inputs into a broader, more holistic evaluation. A scorecard might seem simple on the surface, but creating one that truly reflects business needs and strategic objectives is a more complex task that takes industry knowledge, requirement discovery, collaboration with the client, and refinement. More importantly, it takes a detailed understanding of each client’s specific needs and requirements, both current and future.
Determining which factors are most important, and the weights associated with each, should be a collective effort across business and technical stakeholders. Many other inputs into the process are needed (e.g., reference interviews, technical and operational strengths, business strategy, and competitive imperatives) to round out a clear picture of which solution is best aligned to future requirements. There is both a science and an art to software selection.
Selecting enterprise software is no doubt a critical task. Making the wrong choice can lead to cost overruns, expensive customization, disillusioned end users, poor customer experience, and lost revenue. New software implementations are expensive, but selecting the wrong platform can be even more costly! For this very reason, companies frequently turn to consulting firms like New Elevation that specialize in software selection.