We all recognize that technology and digitization are essential for our supply chains, but quantifying the ROI of a potential investment is easier said than done.
ROI should ideally be calculated across more than one dimension:
- Improvements in P&L metrics: an increase in revenue and in gross and/or net margin, plus improvements in the operational metrics that drive those P&L results
- Savings from the replacement of legacy, complicated, and/or non-value-added software
- Savings from a reduction in manual or analog tasks required to complete the same job that the new software is performing
- Improvements and savings from predictive, prescriptive, and diagnostic analytics insights formulated from clearly articulated problem use cases
It is important to have clear expectations about the ROI you will enjoy in the near term versus the results that may appear over time. Staying aligned on those expectations, internally and with the software vendor, makes for stronger partnerships.
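Put together, a minimal sketch of the combined calculation might look like the following. Every figure here is a hypothetical placeholder; real inputs would come from the analyses in the rest of this post.

```python
# Hypothetical annualized inputs for each ROI dimension, in USD.
pnl_gains       = 750_000   # revenue/margin improvements attributable to the software
legacy_savings  = 200_000   # retired legacy licenses and maintenance
labor_savings   = 300_000   # manual tasks automated away
analytics_gains = 400_000   # diagnostic/predictive/prescriptive insights

total_cost = 500_000        # subscription, implementation, and internal effort

roi = (pnl_gains + legacy_savings + labor_savings + analytics_gains - total_cost) / total_cost
print(f"Blended first-year ROI: {roi:.0%}")
```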
Let us dive into some of the points above in a little more detail.
Savings from the replacement of legacy, complicated, and/or non-value-added software
Companies should not invest in new software systems without first undertaking an effort to rationalize their existing tech stack. A smart way to separate the valuable systems from the resource drains is to plot them on a graph similar to the one below: how do they rate on quantified business impact versus the level of complexity to use? Older, legacy systems are generally more complex to use, and the more complex they are, the higher the cost of the people needed to maintain these beasts. While there are exceptions to the rule, in most cases high complexity also tends to be correlated with time to value: the higher the complexity, the longer the time to value. Drawing from this segmentation analysis, the zone to invest in is systems that are easy to use and low on the complexity scale while also having a high business impact and a quick time to value. Simultaneously, such a study should drive action to phase out complex systems with low business impact and a long time to value.
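To make the segmentation concrete, here is a minimal sketch of how that plot could be scored programmatically. The system names, scores, and thresholds are hypothetical; a real exercise would source them from your own rationalization study.

```python
# Hypothetical inventory of systems scored on business impact and complexity (1-10).
systems = [
    # (name, business_impact, complexity_to_use)
    ("Legacy MRP", 3, 9),
    ("Spreadsheet trackers", 2, 6),
    ("New planning platform", 8, 3),
    ("BI dashboards", 7, 4),
]

IMPACT_THRESHOLD = 5
COMPLEXITY_THRESHOLD = 5

for name, impact, complexity in systems:
    if impact >= IMPACT_THRESHOLD and complexity < COMPLEXITY_THRESHOLD:
        zone = "invest"       # high impact, easy to use, quick time to value
    elif impact < IMPACT_THRESHOLD and complexity >= COMPLEXITY_THRESHOLD:
        zone = "phase out"    # low impact, costly to maintain
    else:
        zone = "review"       # mixed signals: dig deeper before deciding
    print(f"{name}: impact={impact}, complexity={complexity} -> {zone}")
```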
Savings from a reduction in manual or analog tasks required to complete the same job that the new software is performing
This is a view that I love to play around with when quantifying improvements for our customers. Team members often spend time on manual, repetitive tasks, and helping each job role reroute some of that time to more strategic items can be hugely rewarding to both the customer and the software vendor. Quantifying improvements here is straightforward: first, list the main job and task areas by job role, especially the tactical tasks. Layer that with the average salary for that job function. The percentage of time spent on those tasks × the salary of the job role × the number of people in that job function gives us the savings by department that could arise from replacing the tactical work with automated systems. The goal is to have all job roles spend as much time and brainpower as possible on the grey, strategic boxes.
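Here is a minimal sketch of that calculation. The job roles, time splits, salaries, and headcounts below are all illustrative assumptions; real figures would come from your own time studies and compensation data.

```python
# Hypothetical inputs: share of time on tactical tasks, average salary (USD),
# and headcount per job role.
roles = [
    # (job_role, tactical_time_share, avg_salary_usd, headcount)
    ("Demand planner",        0.40, 85_000,  6),
    ("Procurement analyst",   0.55, 70_000, 10),
    ("Logistics coordinator", 0.35, 60_000,  8),
]

total_savings = 0.0
for role, time_share, salary, headcount in roles:
    # % time on tactical tasks * salary * headcount = potential annual savings
    savings = time_share * salary * headcount
    total_savings += savings
    print(f"{role}: ${savings:,.0f} potential annual savings")

print(f"Total across departments: ${total_savings:,.0f}")
```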
Improvements and savings from predictive, prescriptive, and diagnostic analytics insights formulated from clearly articulated problem use cases
The most significant impact in dollars from high-value software systems comes from the data. Anything massive in potential impact is, as expected, tough to actualize, and results materialize over time in waves. It starts with clearly articulating the problem use cases the company wants to solve, specific to the departments the software system touches, and layering those use cases on top of the available data sets. The data sets aggregate organic data created from using the new software and inorganic data ingested by the software through integrations. With every use case, the quickest time to value comes from the diagnostic reports, and you might get to 50% to 60% of the target impact in the first 6 to 12 months as data is gathered and fed into reports.

For example, suppose the goal is to improve gross margins by 3%. In that case, 50% of that target could be achieved in the first 12 months through a series of steps: giving suppliers a digital profile and identity, scoring them with every order, and using reports to drive a positive cycle of assigning projects to the best-performing suppliers. With more data and history collected, the next 12 months may involve predicting the most successful suppliers by product set and accelerating the smart assignments, resulting in the next 20% of gains against the goal. Macroeconomic data can be incorporated, and relevant syndicated data purchased and added, to refine the process over time for further percentage improvements. The predictive models thereby grow more robust as they learn over time, adding even more intelligence and impact to the results (more about the different data set types that improve the quality of your analytics roadmap here).
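To make the waves concrete, here is a minimal sketch of a phased impact projection using the hypothetical 3% gross-margin goal above. The revenue base and phase splits mirror the example and are assumptions, not benchmarks.

```python
# Hypothetical phased rollout of a 3-point gross-margin improvement goal.
# Phase shares follow the example above: ~50% from diagnostics in year one,
# ~20% from predictive supplier assignment in year two, and the remainder
# from enriched macroeconomic/syndicated data over time.
REVENUE_BASE = 100_000_000   # assumed annual revenue, USD
MARGIN_GOAL_PTS = 3.0        # target gross-margin improvement, percentage points

phases = [
    ("Months 0-12: diagnostic reports",      0.50),
    ("Months 12-24: predictive assignments", 0.20),
    ("Months 24+: enriched external data",   0.30),
]

cumulative_pts = 0.0
for phase, share in phases:
    pts = MARGIN_GOAL_PTS * share
    cumulative_pts += pts
    dollars = REVENUE_BASE * pts / 100
    print(f"{phase}: +{pts:.1f} pts (~${dollars:,.0f}/yr), cumulative {cumulative_pts:.1f} pts")
```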
I hope this post helped provide some guidance on how to think about and calculate ROI from software systems.