Seven Thoughts From the American Evaluation Association Conference
In November 2025, Leonard Parker Pool Institute of Health (LPPIH) Executive Director Samantha Shaak, PhD, and Cheryl Arndt, PhD, Manager of Community Health Evaluation and Impact, represented LPPIH at the annual conference of the American Evaluation Association (AEA) in Kansas City, Mo.
The AEA conference brings together evaluators from across sectors to explore how evaluation can be more rigorous, ethical, inclusive and impactful. Below are seven reflections Sam and Cheryl brought back from Evaluation25 – ideas that continue to shape how we think about data, community partnership and learning in our neighborhood-based work.
1. Evaluation as a learning practice, not just for accountability
A recurring theme was the shift from evaluation as compliance toward evaluation as continuous learning. Many presenters emphasized creating feedback loops that support real-time adjustment rather than post-hoc judgment.
This mindset reframes evaluation as a shared learning journey among funders, organizations and communities: one that values curiosity, iteration and humility. When evaluation is positioned this way, it becomes a tool for growth rather than a source of fear.
2. Beneficial community engagement
Community engagement is not a box to check. It is central to whether initiatives succeed or fail. Across sessions, evaluators highlighted trust and transparency as foundational, along with accessibility, responsiveness and timing.
This was especially evident in discussions about sharing data with communities. One organization described a data advisory committee that includes parents and students, ensuring findings are not only accurate but meaningful. They publish results publicly in plain language, paired with clear visuals and QR codes for easy access. The message resonated: Data is most powerful when it is understandable and useful to the people it represents.
3. Responsible and constructive use of artificial intelligence
The power of artificial intelligence (AI) was a prominent theme throughout the conference, as it is across many industries and fields. Evaluators are actively exploring how AI can support analysis, particularly when working with large volumes of qualitative data from surveys and interviews. At the same time, there was thoughtful discussion about the environmental costs of AI and the importance of protecting privacy.
One message came through clearly: “The AI train has left the station, but how we use it matters.” While current tools show promise for text analysis, they still fall short of producing clear, accurate data visualizations without human guidance. Across sessions, presenters emphasized the need for transparency and proper citation when AI is used, reinforcing that AI should support, not replace, evaluator judgment.
4. Effective audience engagement
AEA has long championed “potent presentations,” encouraging evaluators to focus on message, design and delivery. This year, presenters demonstrated creative ways to bring those principles to life.
From asking participants to physically place themselves along a continuum, to using humor and interactive prompts, to handing out small pieces of swag to encourage participation, presenters made sessions feel more dynamic and inclusive. These techniques reminded us that engagement is not a distraction from content; it is often the gateway to deeper learning.
5. Respectful compensation for community members
There is growing consensus that it is unethical to extract time, stories and insight from community members without compensation. Yet how organizations operationalize compensation and power-sharing varies widely.
One presenting organization stood out for formalizing a philosophy and policy around compensating community members. By clearly articulating expectations, decision-making authority and pay structures, the organization has created a framework that promotes fairness and consistency. The takeaway: Compensation is not just about stipends; it is about respect, clarity and shared ownership of the work.
6. Future thinking as an evaluation tool
Several sessions pushed evaluators to think beyond measuring the past or present and instead ask, “What futures are we helping to shape?” Future thinking invites evaluators to consider long-term implications, uncertainty and multiple possible outcomes.
Rather than predicting a single future, this approach encourages scenario-building and adaptability – tools that feel especially relevant in complex place-based work. Evaluation tools that recognize complexity also remind us that change is often not linear. For LPPIH, future thinking reinforces the idea that evaluation can be both reflective and anticipatory, helping communities prepare for what may come next.
7. Relationships are the infrastructure
Across topics – from AI to compensation to data sharing – one truth surfaced again and again: Strong relationships make good evaluation possible. Technical skill matters, but trust, communication and respect are what sustain the work over time.
For community-centered evaluation, relationships are not a “soft” add-on; they are the infrastructure that allows data to be collected ethically, interpreted accurately and used meaningfully.
As Sam and Cheryl returned from AEA, these reflections affirmed much of what LPPIH strives to practice while also stretching our thinking in new directions. The conference reinforced that evaluation, at its best, is not just about measuring impact. It’s also about learning with communities, honoring their expertise and building systems that support well-being now and into the future.