Enterprise software selection is a complicated process that needs structure to succeed. Most standard frameworks address features, pricing and fit, but fail to account for the differing risks of implementation and adoption. This post describes how to add risk to the framework so that your decision is de-risked.
Selecting software is a critical decision with significant implications. Evaluations limited to function and cost are insufficient given the financial and organizational investment. Assessing comparative risk in the selection process can help save your team’s sanity and, perhaps, your own.
Two Real-Life Stories
For their human resources management and payroll needs, a 130-person company selected a well-known cloud-based human resource information system (HRIS). Their implementation proved to be far more costly, complex and problematic than they envisioned. Only after spending several hundred thousand dollars more than budgeted did they discover their selected HRIS was actually designed for companies with 10,000 employees or more.
A 2,000-person company selected automated reconciliation software. During implementation, the system consultants struggled to get the software to perform at the promised volume and accuracy levels. After months of delays, cost overruns and hundreds of internal accounting staff hours spent alongside the consultants, the company finally abandoned the project and stayed with its manual processes and spreadsheets.
What Problem Do These Stories Highlight?
For both companies, the root cause of their problems was a failure of the software selection process to uncover significant risks they should have taken into account before their final decision. Oddly, while standard software implementation methodologies typically include carefully identified project risks and mitigation plans, common software selection methodologies largely ignore risk when defining decision criteria and scoring.
Through my experience implementing multiple enterprise resource planning systems, I developed a risk-adjusted software selection methodology that builds off the standard framework. In this article, I’ll introduce that methodology to help companies avoid unpleasant surprises after their software purchase decisions.
Standard Software Selection Framework
The standard software selection decision framework generally relies upon four areas of assessment: functional fit, technical fit, cost, and vendor support/references. Each area is weighted and scored, resulting in a composite weighted score.
As an example, suppose Vendor A scores 6 on functional fit (weight 0.40), 5 on technical fit (0.25), 3 on cost (0.25) and 7 on vendor support/references (0.10). Vendor A’s overall score is calculated as (6 x 0.4) + (5 x 0.25) + (3 x 0.25) + (7 x 0.10) = 5.1. In other words, each area’s score is multiplied by its weight, and the results are summed. Each vendor receives a similarly calculated score, and all vendors are then stack-ranked from high to low. Conspicuously missing is any explicit factor for risk.
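The composite-score arithmetic can be sketched in a few lines of Python. The area names and weights below simply mirror the Vendor A example; they are illustrative, not part of any standard:

```python
# Weights for the four standard assessment areas (from the example above).
WEIGHTS = {
    "functional_fit": 0.40,
    "technical_fit": 0.25,
    "cost": 0.25,
    "vendor_support": 0.10,
}

def composite_score(scores: dict) -> float:
    """Sum of (area score x area weight) across all assessment areas."""
    return sum(scores[area] * weight for area, weight in WEIGHTS.items())

# Vendor A's raw area scores from the example.
vendor_a = {"functional_fit": 6, "technical_fit": 5, "cost": 3, "vendor_support": 7}
print(round(composite_score(vendor_a), 2))  # 5.1
```

The same function is applied to every vendor under consideration, and the vendors are then sorted by composite score to produce the stack rank.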
Risk-Adjusted Software Selection Methodology
My risk-adjusted selection methodology utilizes the two-dimensional baseline framework noted above and extends the framework by a third dimension: risk. I identify and assess the risks in each area and explicitly modify each area’s weighted score by a comparative risk factor, either positive or negative.
Comparative risk is intended to measure the relative risk of the software solutions being considered. The applied risk factor is (1 + X), where X is the comparative risk rating: positive when a solution carries less relative risk than its alternatives, negative when it carries more. The result (e.g., 1 + 0.3 = 1.3, or 1 - 0.6 = 0.4) is multiplied by the weighted area score, thereby adjusting the area’s score higher or lower. If a given risk area shows no differentiation between the considered solutions, its comparative risk is neutral (X = 0).
Continuing the Vendor A example, assume risk factors of -0.2 for functional fit, +0.4 for technical fit, -0.6 for cost and 0 for vendor support/references. Vendor A’s overall score is now calculated as [(6 x 0.4)(1 - 0.2)] + [(5 x 0.25)(1 + 0.4)] + [(3 x 0.25)(1 - 0.6)] + [(7 x 0.1)(1 + 0)] = 4.67.
The risk-adjusted score of 4.67 for Vendor A is materially lower than its initial score of 5.1. That’s certainly enough to shift the vendor stack rank for similarly capable solutions.
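Extending the earlier sketch, the risk adjustment multiplies each area’s weighted score by (1 + X). The risk factors below mirror the Vendor A example and are illustrative only:

```python
# Same area weights as the baseline framework.
WEIGHTS = {
    "functional_fit": 0.40,
    "technical_fit": 0.25,
    "cost": 0.25,
    "vendor_support": 0.10,
}

def risk_adjusted_score(scores: dict, risk_factors: dict) -> float:
    """Each area's weighted score is scaled by (1 + X), where X is the
    comparative risk factor (positive = less relative risk, negative = more).
    Areas with no risk differentiation default to X = 0 (neutral)."""
    return sum(
        scores[area] * weight * (1 + risk_factors.get(area, 0.0))
        for area, weight in WEIGHTS.items()
    )

vendor_a_scores = {"functional_fit": 6, "technical_fit": 5, "cost": 3, "vendor_support": 7}
vendor_a_risk = {"functional_fit": -0.2, "technical_fit": 0.4, "cost": -0.6, "vendor_support": 0.0}
print(round(risk_adjusted_score(vendor_a_scores, vendor_a_risk), 2))  # 4.67
```

Running both calculations side by side makes the effect visible: the unadjusted 5.1 drops to a risk-adjusted 4.67, which may well reorder the stack rank against a similarly capable competitor.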
Identifying And Assessing Comparative Risks
Now that I’ve introduced the notion of comparative risks, the obvious next question is how you can go about identifying risks and assessing those risks. Risks will depend on the type of software application being considered. Risks for credit card processing software will be quite different than for customer relationship management software.
To help you apply this methodology to your own software selection, I’ll provide general guidance that should allow you to confidently identify risks across the framework’s four assessment areas.
Functional Fit
Functional fit assessments often focus on the “what.” More important, and more revealing of risk, is the “how.” How the software accomplishes functions is best understood through demonstration. The demonstration should not only walk through processes but also include a representative sample of implementation configuration steps. Stronger still are demonstrations using your data, not merely the vendor’s sample data. The vendor should show you, not merely tell you.
Technical Fit
Risk identification and assessment in technical areas combine the “what” and the “how.” Pay particular attention to how user access is managed through security administration. Understanding how upgrades are handled is also important, whether the solution is on-premises or cloud-hosted. Lastly, vendors should demonstrate operation on all of the devices, browsers and operating systems under which you expect to operate.
Cost
Risk around cost is highest for implementation, user counts, additional overhead staff, and the contractual terms and conditions that affect subscription or license costs. For example, if the software has a volume-based pricing model and you exceed a tier limit, what happens? Does the contract allow you to move to a lower tier if your volumes decline?
Vendor Support And References
Although reference calls are generally assumed to be positive, they are often highly revealing regarding surprises and risks. Asking references what surprises they discovered, or what they wish they had known in advance, will garner a wealth of additional risk knowledge.
One More Story
A public company suffered multiple financial restatements. The company’s chief financial officer needed to fix the problem and was under tremendous time pressure. He had completed a short assessment of two options and was convinced Vendor B was the right choice. I spoke with him about running a comprehensive, but fast, software selection because I questioned whether the risks of the solutions had been fully examined. He agreed to this methodology, and we completed the selection in record time.
Through the course of the selection, the client’s decision changed 180 degrees, unambiguously landing on Vendor A. The power of risk-adjusted software selection prevented an enormous mistake and allowed the company to regain the integrity of its financial statements.