Targeted quantitative research is a critical need for management consulting companies: it supports and justifies existing strategy recommendations and underpins data-driven decision making. Moreover, the typical management consulting skillset does not lend itself to the most effective data science and machine learning outcomes. This case study explores one of our partnerships, in which we act as the quantitative research division of an Australian management consulting firm, responsible for the continuous delivery and improvement of machine learning models and monthly data science reports.
Providing value-adding quantitative research requires both experience in translational data science, where the key drivers and pain points of business value and ROI can be accurately identified, and expertise in the assumptions and limitations of existing machine learning systems. A considerable number of data science projects fail to deliver their promised ROI simply because machine learning models are applied blindly, outside of their effective context. Throughout this engagement, we continuously applied feedback from subject matter experts to evaluate how well our modelling assumptions fit the industry and business environment.
In detail, our monthly analyses were derived from interpretations of the machine learning models with respect to how the given inputs and attributes influenced the model predictions. Our team leveraged its expertise in interpretable machine learning by adopting model-agnostic interpretability methods to minimise the accuracy-interpretability trade-off in black-box models. We do this on a monthly basis for our management consulting partner, given the regulatory trend towards increased model transparency and explainable automated decision-making systems. Moreover, our team strongly believes in relentless evaluation of model interpretability in order to mitigate hidden biases arising from the model or the data collection process. Above all, it is critical to understand the influential drivers of automated decision-making systems, as well as the domains in which the model assumptions break down and result in suboptimal performance.
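As a minimal sketch of what a model-agnostic interpretability method looks like in practice, the snippet below uses permutation importance to rank the drivers of a black-box classifier. The dataset, model, and features here are synthetic stand-ins, not our partner's actual pipeline; the point is that the technique needs only the model's predictions, never its internals.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic dataset standing in for client data.
X, y = make_classification(n_samples=500, n_features=6, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Any black-box model can be audited this way.
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in held-out accuracy:
# a large drop marks an influential driver of the model's predictions.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
ranking = np.argsort(result.importances_mean)[::-1]
for idx in ranking:
    print(f"feature_{idx}: {result.importances_mean[idx]:.3f}")
```

Because the method treats the model as a black box, the same audit runs unchanged when the underlying model is swapped out between monthly reports.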
Delivering this quantitative research further warranted fast iterative cycles that accelerate the model development process by continuously incorporating stakeholder feedback and evaluating candidate models based on their quantitative effects on business value. We stand by our Data Science Success Framework, unifying technical data scientists, subject matter experts, workflow considerations, and productionisation strategies in order to deliver the best possible outcome. This adaptive process maximises the impact of the machine learning solutions and deeply embeds the data science workflow into the core business objectives.
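One iteration step can be sketched as scoring candidate models on a business-value proxy rather than raw accuracy. The cost and benefit figures below are illustrative assumptions, not our partner's real economics, and the two candidate models are placeholders.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=8, random_state=1)

# Assumed economics: a true positive earns $120, a false positive costs $30.
def business_value(y_true, y_pred):
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return 120 * tp - 30 * fp

candidates = {
    "logistic": LogisticRegression(max_iter=1000),
    "tree": DecisionTreeClassifier(max_depth=4, random_state=1),
}

# Cross-validated predictions keep the comparison honest on unseen data.
scores = {}
for name, model in candidates.items():
    preds = cross_val_predict(model, X, y, cv=5)
    scores[name] = business_value(y, preds)

best = max(scores, key=scores.get)
print(scores, "-> selected:", best)
```

Tying model selection to a dollar-denominated metric like this is what lets each iteration's stakeholder feedback (for example, revised costs of a false positive) flow directly into which candidate is promoted.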