
In program evaluation research, two closely related approaches stand out: the Theory of Change (ToC) and the Logic Model. As tools for strategic planning and evaluation, they offer complementary frameworks for understanding interventions. Their differences, however, often cause confusion, especially when deciding which to apply first. For evaluators caught in the crosshairs of this debate, this article examines the distinctions, applications, and intersections of the two models, with an emphasis on short-term evaluation projects.


Understanding the Basics

Theory of Change (ToC): The Theory of Change is essentially a narrative, tracing a comprehensive path from the identified problem to the intended or ultimate outcome. It draws a detailed picture from an intervention's commencement to its eventual outcomes, highlighting the pathways of change and the underlying assumptions. A ToC can take almost any shape, even branching sideways, and offers insight into the broader context, the interactions among actors, and the processes at play.

Logic Models: Logic Models are more schematic. As per the W.K. Kellogg Foundation (2004), a Logic Model provides a simplified picture of a program's activities, processes, and outcomes, showing the logical links among the resources that are invested, the activities that take place, and the benefits or changes that result. In essence, it encapsulates how inputs lead to activities, outputs, outcomes, and impact.
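
To make the schematic nature concrete, here is a minimal sketch, in Python, of a Logic Model as an ordered chain of components. The component names follow the standard chain described above; the example entries (a generic training programme) are hypothetical:

```python
# Illustrative sketch: a Logic Model as an ordered chain of components.
# The example entries are hypothetical, for a generic training programme.
from collections import OrderedDict

logic_model = OrderedDict([
    ("inputs",     ["funding", "trainers", "venue"]),
    ("activities", ["deliver workshops", "coach participants"]),
    ("outputs",    ["120 participants trained"]),
    ("outcomes",   ["improved job-seeking skills"]),
    ("impact",     ["higher local employment"]),
])

# Reading the chain left to right mirrors the model's 'if-then' logic.
print(" -> ".join(logic_model.keys()))
# inputs -> activities -> outputs -> outcomes -> impact
```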


[Figure: Theory of Change and Logic Model]



Which Comes First? Understanding the Sequence

ToC and Logic Models are complementary in nature, though their strategic application depends significantly on the purpose and scope of the study or project:

When embarking on a large or complex project, where a deep understanding of context, stakeholder dynamics, and the intricacies of an intervention is required, it makes sense to begin with the Theory of Change. The ToC helps clarify the broader context and the various pathways of change. A Logic Model can then be derived from it to distill the finer details. The ToC is comprehensive, expounding the theory and clarifying its assumptions, whereas the Logic Model offers a visual summary, a snapshot, of the proposed theory.

When an existing programme or intervention requires only a narrowly focused evaluation, or is limited to a few objectives, starting with the Logic Model is appropriate. Likewise, if the need leans towards performance evaluation or feedback loops, the Logic Model is often the better starting point. Its bottom-to-top or left-to-right design offers clarity on the perceived relationships among resources, activities, and outcomes. Given the time and resource constraints of short-term projects, Logic Models aid strategic decision-making. For a more complex or deeper understanding of context, actors, and assumptions, however, the ToC provides richer insights, even if it demands background research and ground-truthing.


Conclusions:

Both the Theory of Change and the Logic Model are invaluable assets for evaluators, and they are applied in a complementary fashion. At times, evaluators or researchers develop an initial Logic Model and then follow it with a Theory of Change narrative. In other instances, there is a need to identify root causes and capture in-depth information; the ToC then becomes the core and is more nuanced than the Logic Model. As Breuer et al. (2016) noted, a ToC helps in understanding uncertainties through a backward mapping approach, working from the desired long-term outcome back to its preconditions. A Logic Model, on the other hand, offers a linear way to indicate the linkage between inputs and outcomes.

Choosing between the two is a matter of relevance and applicability. As practitioners, our choices should hinge not just on project goals but also on the depth, breadth, and context we aim to explore and understand.


References:

W.K. Kellogg Foundation (2004). Logic model development guide. Battle Creek, MI: W.K. Kellogg Foundation.

Breuer, E., Lee, L., De Silva, M., & Lund, C. (2016). Using theory of change to design and evaluate public health interventions: A systematic review. Implementation Science, 11(1), 63.



Introduction

From the perspective of a researcher and evaluator, it is not uncommon to be immersed in data, logic model spreadsheets, graphs, and survey outcomes. These tools, while essential, often overshadow a more people-focused approach of equal merit: participatory evaluation. This method turns evaluation into a collective endeavour in which local stakeholders move from being subjects to active contributors in gauging a program's impact. The approach is rooted in the "principles of emancipation and social justice" (King, Cousins, & Whitmore, 2007).

Data Collection – Quantity vs Quality?

Traditionally, data collected through primary interviews and secondary sources has steered the evaluation process. In-depth interviews can resemble high-stakes grilling sessions that attempt to amass a large volume of data within a limited time, and the exercise often descends into a mechanical process. In contrast, participatory evaluation champions collaboration. Community members move beyond their role as mere data points, offering precise, localized insights. The data gathered through participatory evaluation is often richer than that collected by conventional means, and when fused with quantitative data it provides a holistic view. Moreover, the method turns participants into co-evaluators, fostering empowerment and heightening engagement, which in turn helps secure quality data.


Building Community Capacity

A striking aspect of this inclusive approach is the long-term value it offers. It equips community members with evaluation skills, preparing them to identify their own needs and strategize initiatives. This empowerment seeds a lasting culture of self-evaluation and continuous improvement. Participatory evaluation also adds an essential layer of contextual understanding: when community members help interpret findings, they imbue the results with local socio-cultural nuance. Importantly, the approach upholds ethical standards by giving the community control over how its data is used and interpreted.


Use of Asset Mapping

Asset mapping stands out as a potent technique for initiating participatory evaluation. First introduced within Kretzmann and McKnight's asset-based community development (ABCD) approach, it engages community members in recognizing local strengths, resources, and prospects (Lightfoot, McCleary, & Lum, 2014).

Application of Cyclical Steps

ABCD encompasses five key assets: individuals, associations, institutions, physical assets, and connections. This asset inventory, once mapped, serves as a catalyst for community engagement, ensuring that the evaluation is grounded in the community's own perspective. To map the assets, a cycle of steps, Probe, Train, and Conduct, can be undertaken, giving participants a systematic approach to collecting data, as sketched below.
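
Purely as an illustration of how such an inventory and cycle might be organized, the sketch below uses the five ABCD categories with hypothetical entries and placeholder Probe/Train/Conduct steps:

```python
# Illustrative sketch: an ABCD-style asset inventory with the five key
# categories, updated through Probe -> Train -> Conduct cycles.
# All entries and helper functions are hypothetical.

asset_inventory = {
    "individuals":     ["retired teacher willing to tutor"],
    "associations":    ["residents' gardening club"],
    "institutions":    ["public library", "local clinic"],
    "physical_assets": ["unused community hall"],
    "connections":     ["library hosts the gardening club"],
}

def probe(inventory):
    """Explore the community with members to surface candidate assets."""
    inventory["individuals"].append("youth football coach")

def train(participants):
    """Equip community members to collect and record asset data."""
    print(f"trained {len(participants)} co-evaluators")

def conduct(inventory):
    """Participants carry out the mapping and update the inventory."""
    inventory["physical_assets"].append("vacant lot suitable for a garden")

for _ in range(2):  # each cycle refines and extends the map
    probe(asset_inventory)
    train(["member A", "member B"])
    conduct(asset_inventory)
```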


Navigating the Challenges

While promising, the participatory approach is not without obstacles, notably community politics and power hierarchies. One pressing concern is striking a balance between a structured methodology and the more fluid participative style. It is imperative to recognize these dynamics and devise strategies that uphold the inclusivity and fairness of the evaluation process.


In conclusion, participatory evaluation, rooted in principles of social justice and inclusivity, is more than just a methodology. When academic theory blends with on-the-ground experience, it evolves into a philosophy that not only elevates the evaluation process but also ensures it is inclusive.


References

King, J. A., Cousins, J. B., & Whitmore, E. (2007). 'Making sense of participatory evaluation: Framing participatory evaluation'. New Directions for Evaluation. https://doi.org/10.1002/ev.226

Lightfoot, E., McCleary, J. S., & Lum, T. (2014). 'Asset Mapping as a Research Tool for Community-Based Participatory Research in Social Work'. Social Work Research. https://doi.org/10.1093/swr/svu001


Machine learning models are widely used to predict the performance of marketing activities. This, however, is the story of how we developed an image classification model that predicts the performance of Instagram advertisements and creates an opportunity to lift creative standards. "Will consumers engage with this creative element?" and "To what extent does it create an emotional connection?" were the questions on our researchers' minds.

We selected a tea brand's Instagram image adverts for this research, as we felt there was scope for improving their engagement rate (viz. likes, comments, and saves). We set three research objectives: 1) identify high-performing Insta adverts; 2) build a profile of high-performing campaign elements; 3) explore building a database of creative campaigns, especially those with emotional content. The brand runs different types of campaigns, such as festive, new product launch, new offer, and sales promotions, and our initial scan showed that engagement rates varied significantly between campaigns.
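
As a point of reference, a minimal sketch of the engagement-rate arithmetic is below. The numerator follows the definition above; the denominator (reach) is an illustrative assumption, since impressions or follower counts are common alternatives:

```python
def engagement_rate(likes: int, comments: int, saves: int, reach: int) -> float:
    """Engagement rate as (likes + comments + saves) / reach.

    The numerator follows the definition in the post; the denominator
    (reach) is an illustrative assumption. Impressions or follower
    count are common alternatives.
    """
    return (likes + comments + saves) / reach

# Hypothetical advert: 500 likes, 40 comments, 60 saves, reach of 20,000
print(f"{engagement_rate(500, 40, 60, 20_000):.2%}")  # -> 3.00%
```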


We developed a three-step C-P-M Framework: Classification, Prediction, Measurement.

In the first stage, Classification, historical Insta adverts from 2018 and 2019 were classified using natural language processing and image recognition methods. Three classes, High Performers, Medium Performers, and Negative Performers, were formed based on factors such as the presence of a primary image, headline text, number of likes, and offer prominence.
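
A minimal sketch of this stage is below, assuming the NLP and image-recognition steps have already reduced each advert to tabular features. The feature names, values, and the scikit-learn model are illustrative assumptions, not our production pipeline:

```python
# Illustrative sketch of the classification stage. Creative features are
# assumed to come from upstream NLP / image-recognition steps; the names,
# values, and model choice are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

ads = pd.DataFrame({
    "has_primary_image": [1, 1, 0, 1],
    "headline_length":   [32, 18, 55, 24],
    "offer_prominence":  [0.8, 0.2, 0.1, 0.6],  # illustrative 0-1 score
})

# Performance classes assigned from historical engagement (e.g. likes)
labels = ["High", "Medium", "Negative", "High"]

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(ads, labels)
```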


In the second stage, Prediction, new Insta adverts were checked against i) the above classes and ii) an automated eye-tracking model. Prediction scores revealed the class to which a new advert would be assigned, giving the creative team time to revise adverts before release.
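
Continuing the hypothetical classifier above, scoring a new advert might look like the following sketch, with the eye-tracking model reduced to a placeholder attention score:

```python
# Illustrative sketch: score a new advert against the learned classes.
# Continues the hypothetical `clf` and feature names from the previous
# sketch; the eye-tracking model is a placeholder value here.
import pandas as pd

new_ad = pd.DataFrame({
    "has_primary_image": [1],
    "headline_length":   [20],
    "offer_prominence":  [0.7],
})

class_probs = dict(zip(clf.classes_, clf.predict_proba(new_ad)[0]))
attention_score = 0.74  # placeholder for the automated eye-tracking output

# Flag for revision if the advert is unlikely to be a High performer or
# falls below a (hypothetical) attention threshold.
if max(class_probs, key=class_probs.get) != "High" or attention_score < 0.5:
    print("Revise creative before release:", class_probs)
```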


In the third stage, Measurement, once the adverts were released, key performance variables such as the number of likes and shares were correlated with an in-store KPI, average sales per square foot, in order to understand the lift the campaign delivered in store.
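
A minimal sketch of this measurement step, with made-up weekly figures and an assumed pairing of campaign weeks to store sales:

```python
# Illustrative sketch: correlate post-release engagement with the in-store
# KPI (average sales per square foot). All figures are made up, and the
# week-level pairing is an assumption for illustration.
import pandas as pd

weekly = pd.DataFrame({
    "likes":          [950, 420, 610, 780, 300],
    "shares":         [120, 40, 75, 98, 22],
    "sales_per_sqft": [14.2, 9.8, 11.5, 13.1, 8.9],
})

weekly["engagement"] = weekly["likes"] + weekly["shares"]
print(weekly["engagement"].corr(weekly["sales_per_sqft"]))  # Pearson r
```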

Further, the combination of automated eye tracking and machine learning models enables client teams to detect the creative concepts with the greatest emotional impact on consumers. This lifts creative standards in what we term 'Creative Idea Leapfrogging'.
