JRNY 360 Practice Notes

Short reflections and practical tools for people working in complexity.

Relational Practice Isn’t the “Soft Stuff” in M&E — It’s the Method

skills and practice Feb 24, 2026

Most monitoring and evaluation (M&E) systems fail for a simple reason:

They’re built to extract information from people, not build the conditions for people to tell the truth.

So we get dashboards that look “healthy,” reports that read “confident,” and learning that stays safely superficial — while the real dynamics (power, trust, fear, fatigue, harm, adaptation) remain invisible.

At JRNY 360, we take a different stance:

Relational practice isn’t an add-on to rigorous M&E. It’s what makes M&E rigorous.

Because the quality of your evidence is inseparable from the quality of your relationships.

The problem: “Good data” can be bad evidence

If your M&E relies on people’s input — interviews, surveys, focus groups, reflection sessions, partner reporting — then you’re not measuring a program.

You’re measuring what people feel safe enough to say about a program.

Here’s what “low-relational” M&E often produces:

  • Performative reporting: partners write what they think funders want to hear.

  • Sanitised learning: risk, failure, or harm gets quietly edited out.

  • Participation fatigue: communities are repeatedly asked, rarely informed, and seldom see change.

  • False confidence: numbers rise while trust falls.

  • Evidence without context: outcomes are reported, but drivers are unknown.

And then we wonder why strategy doesn’t shift, why inequities persist, or why “scaling what works” doesn’t work.

Relational practice is an evaluation competency

Relational practice is often framed as a nice-to-have — the human touch. In reality, it’s a core competency that shapes:

  • What questions get asked

  • Whose knowledge is treated as credible

  • What people will disclose

  • How meaning is made from data

  • Whether learning turns into action

In other words:

Relational practice is how you reduce bias, surface complexity, and improve validity in real-world M&E.

Not by pretending power doesn’t exist — but by designing with it in mind.

What is relational practice in M&E?

Relational practice is the discipline of intentionally creating the conditions for honest evidence and shared learning.

It means treating evaluation not as an audit of people, but as inquiry with people.

In practice, it looks like:

  • Building trust and clarity before collecting data

  • Designing participation to be safe, respectful, and useful

  • Recognising the power dynamics in who asks, who answers, and who benefits

  • Sharing findings in ways that create accountability and enable adaptation

  • Creating feedback loops where people can disagree, correct, and contribute

Relational practice doesn’t mean avoiding hard truths.

It means being able to reach hard truths.

Relational M&E is built on three shifts

1) From “proving” to “improving”

Traditional M&E asks: Did we hit the target?
Relational M&E asks: What’s changing, why, for whom, and what do we do next?

2) From extraction to reciprocity

Traditional M&E collects data and disappears.
Relational M&E makes participation worth it — through transparency, feedback, and shared benefit.

3) From control to sensemaking

Traditional M&E treats complexity as noise.
Relational M&E treats complexity as signal — and makes space to interpret it together.

What to measure when you measure relationships

A common fear is: “But relational practice isn’t measurable.”

It is — if you stop treating relationships as vibes, and start treating them as conditions that shape outcomes.

Here are examples of relational indicators you can responsibly use:

Trust & safety (leading indicators)

  • % of partners who report they can raise concerns without negative consequences

  • Evidence of dissent in learning spaces (disagreement is often a sign of safety)

  • Time-to-response when issues are raised

Reciprocity & usefulness

  • % of participants who receive findings back in a usable format

  • Partner/community perception of whether M&E helped improve delivery

  • Number of programme adaptations made based on feedback

Power & participation

  • Who sets the learning questions (tracked over time)

  • Representation in sensemaking spaces (and who speaks)

  • Perceived fairness of decisions informed by data

Relational practice doesn’t replace outcome measurement — it strengthens the integrity of it.
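To show these indicators are genuinely computable, here is a minimal sketch in Python. The records, field names, and values are illustrative assumptions, not a standard schema — the point is simply that "trust and safety" and "reciprocity" can be tracked from ordinary survey and issue-log data.

```python
from datetime import date

# Hypothetical feedback records; field names are illustrative, not a standard schema.
partner_surveys = [
    {"partner": "A", "can_raise_concerns": True,  "received_findings": True},
    {"partner": "B", "can_raise_concerns": False, "received_findings": True},
    {"partner": "C", "can_raise_concerns": True,  "received_findings": False},
    {"partner": "D", "can_raise_concerns": True,  "received_findings": True},
]

# Hypothetical issue log: when a concern was raised and when it got a response.
issues_log = [
    {"raised": date(2026, 1, 5),  "responded": date(2026, 1, 9)},
    {"raised": date(2026, 1, 20), "responded": date(2026, 1, 22)},
]

def pct(records, field):
    """Share of records (as a %) where the boolean field is True."""
    return 100 * sum(r[field] for r in records) / len(records)

def mean_response_days(log):
    """Average days between an issue being raised and a response."""
    return sum((i["responded"] - i["raised"]).days for i in log) / len(log)

print(f"Can raise concerns safely: {pct(partner_surveys, 'can_raise_concerns'):.0f}%")
print(f"Received findings back:    {pct(partner_surveys, 'received_findings'):.0f}%")
print(f"Mean time-to-response:     {mean_response_days(issues_log):.1f} days")
```

Tracked over successive cycles, even a toy calculation like this turns "do partners feel safe?" from a vibe into a trend line you can act on.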

A quick self-check: is your M&E relational or extractive?

Ask yourself the five questions below. If you answer "no" to two or more, that's your starting point.

  1. Do participants know how their data will be used — and can they opt out safely?

  2. Do partners/community members help define the learning questions?

  3. Do you share findings back in a way that’s timely and accessible?

  4. Do you have mechanisms for people to disagree with findings or add context?

  5. Have you tracked whether feedback actually led to changes?
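The self-check above is simple enough to score mechanically. A minimal sketch, with assumed yes/no answers purely for illustration:

```python
# The five self-check questions, paraphrased; wording is condensed for brevity.
QUESTIONS = [
    "Participants know how data is used and can opt out safely",
    "Partners/community help define the learning questions",
    "Findings are shared back in a timely, accessible way",
    "People can disagree with findings or add context",
    "Feedback is tracked through to actual changes",
]

def self_check(answers):
    """Return the questions answered 'no' (False); two or more flag a starting point."""
    return [q for q, yes in zip(QUESTIONS, answers) if not yes]

# Example responses (assumed, for illustration only).
answers = [True, False, True, False, True]
gaps = self_check(answers)
if len(gaps) >= 2:
    print("Starting point(s):")
    for q in gaps:
        print(" -", q)
```

The design choice worth copying is not the code but the rule: the "no" answers, not the "yes" answers, are the output.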

The takeaway

High-performing M&E is not just a technical system.

It’s a relational system.

And if your M&E is producing neat reports but no real learning — or compliance but not candour — the issue may not be your indicators.

It may be your relationships.

At JRNY 360, we build monitoring, evaluation, and learning approaches that are designed for real-world conditions: complexity, constraints, power dynamics, and the human realities of doing the work.

What Next?

If you’re redesigning your M&E approach (or trying to repair trust in an existing one), here are three next steps you can take immediately:

  1. Choose one programme and run the RELATE Cycle as a pilot

  2. Replace one reporting meeting with a structured sensemaking session

  3. Add one feedback mechanism that closes the loop with participants
