Research Foundation

KoNote is designed around research on what helps participants succeed — not just what's convenient for agencies or funders.

Why This Matters

Most outcome tracking software is designed backward. It starts with funder reporting requirements and works down to data collection. Participants become data sources rather than partners in their own progress.

KoNote takes a different approach. We started with research on what actually improves participant outcomes, then designed software to support those practices. The reporting features exist, but they're not the foundation.

The Research

Feedback-Informed Treatment

Research by Scott Miller, Barry Duncan, and colleagues has consistently shown that routinely collecting client feedback and using it to guide services leads to significantly better outcomes. Clients are more likely to achieve meaningful change, and at-risk cases show the greatest improvement.

The key finding: the act of systematically asking clients about their experience strengthens the working relationship — and that relationship is the strongest predictor of outcomes across service types.

Key source: Miller, S.D., Duncan, B.L., Brown, J., Sorrell, R., & Chalk, M.B. (2006). Using formal client feedback to improve retention and outcome: Making ongoing, real-time assessment feasible. Journal of Brief Therapy, 5(1), 5-22.

Collaborative Documentation

Studies on collaborative documentation — where clients participate in creating their own records — show that most clients find it helpful and that it improves treatment adherence. When clients see and contribute to what's being written about them, trust increases.

Transparency isn't just ethical; it's clinically effective. Clients sense alliance ruptures before service providers do. Routine feedback catches problems early, before clients disengage.

Key source: Stanhope, V., Ingoglia, C., Schmelter, B., & Marcus, S.C. (2013). Impact of person-centered planning and collaborative documentation on treatment adherence. Psychiatric Services, 64(1), 76-79.

Brevity and Simplicity

Implementation research consistently shows that brief tools see better compliance than lengthy ones. Even single-item measures can be psychometrically valid when designed well. The Outcome Rating Scale and Session Rating Scale work because each takes under a minute.

When feedback tools feel like administrative burden rather than clinical support, compliance becomes "empty" — going through the motions without genuine engagement.

Key source: Campbell, A., & Hemsley, S. (2009). Outcome Rating Scale and Session Rating Scale in psychological practice: Clinical utility of ultra-brief measures. Clinical Psychologist, 13(1), 1-9.

How This Shaped KoNote

Each research finding informed specific design decisions:

Research Finding → KoNote Design Response

  • Asking for feedback strengthens the relationship → Progress notes prompt for participant perspective, not just staff observations
  • Clients know when something isn't working → Progress charts are designed to be shared with participants, not hidden in reports
  • Transparency builds trust → The interface is designed so staff can show participants their own records
  • Brevity improves compliance → Quick notes are genuinely quick — structured notes only when needed
  • Feedback should inform the relationship, not a report → Metrics connect to individual plans, not just aggregate reporting
  • Consent is foundational → Participant setup requires consent confirmation before any data entry
  • Implementation matters as much as tools → Customisable terminology and workflows to fit how you actually work

Outcome Measurement

KoNote's metrics aren't arbitrary — they're built on over 50 years of research in goal-setting, self-efficacy, and outcome measurement. Three distinct dimensions — Goal Progress, Self-Efficacy, and Satisfaction — capture the full picture of participant change. Each is measured through participant self-report, which is the valid methodology for subjective outcomes. And KoNote's AI goal builder applies eight research-grounded criteria to help coaches write targets that actually drive better outcomes.

Specific Goals Work Better

A goal like "build confidence with English" sounds reasonable, but research shows it leads to dramatically worse outcomes. Locke and Latham's meta-analyses across 35 years found that specific goals produce 250% better performance than vague "do your best" goals — not just better measurement, but better actual participant progress.

This isn't about being pedantic. Specific goals work because they direct attention, energise effort, increase persistence, and trigger strategy development. KoNote's AI goal builder pushes coaches toward specificity — validating targets against criteria like observable behaviour, conditions, and success thresholds — because it genuinely helps participants succeed.

Key sources: Locke, E.A. & Latham, G.P. (2002). Building a practically useful theory of goal setting and task motivation. American Psychologist, 57(9), 705–717. Doran, G.T. (1981). There's a S.M.A.R.T. way to write management's goals and objectives. Management Review, 70(11), 35–36.
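
The criteria-based validation described above can be sketched as a simple check. This is an illustrative sketch only: the field names, the three criteria shown, and the missing_criteria helper are hypothetical stand-ins, not KoNote's actual schema or its eight criteria.

```python
# Illustrative sketch of checking a draft goal target for specificity,
# in the spirit of Locke & Latham and Doran's S.M.A.R.T. criteria.
# Field names and criteria here are invented for illustration.

REQUIRED_FIELDS = {
    "behaviour",   # an observable action ("order a coffee in English")
    "condition",   # the context it happens in ("at a cafe, unprompted")
    "threshold",   # what counts as success ("3 times in one week")
}

def missing_criteria(target: dict) -> list[str]:
    """Return the specificity criteria a draft target fails to meet."""
    return sorted(f for f in REQUIRED_FIELDS if not target.get(f))

vague = {"behaviour": "build confidence with English"}
specific = {
    "behaviour": "order a coffee in English",
    "condition": "at a cafe, unprompted",
    "threshold": "3 times in one week",
}

print(missing_criteria(vague))     # ['condition', 'threshold']
print(missing_criteria(specific))  # []
```

A real goal builder would do far more (checking that the behaviour is genuinely observable, for instance), but the shape is the same: a vague aspiration fails named criteria, and each failure is a concrete prompt back to the coach.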

Self-Report Is Valid

A common concern is that self-report metrics are "soft" or less reliable than objective measures. The evidence says otherwise. For subjective experiences — confidence, satisfaction, perceived progress — the participant's own report isn't a fallback. It is the primary valid measure. The NIH's PROMIS framework, the most comprehensive patient-reported outcome system ever built, is founded on exactly this principle.

What makes self-report valid isn't who reports — it's how the question is anchored. Self-efficacy (a person's belief in their ability to perform specific tasks) must be measured domain-specifically. You can't just ask "are you confident?" You need to ask about specific behaviours: "How sure do you feel about being able to cook a healthy meal?" KoNote follows this principle — every metric is anchored to the specific target behaviour.

Key sources: Bandura, A. (2006). Guide for constructing self-efficacy scales. In F. Pajares & T. Urdan (Eds.), Self-efficacy beliefs of adolescents. PROMIS — NIH Common Fund.
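
Bandura's anchoring principle is simple enough to show in one function. A minimal sketch, assuming a hypothetical efficacy_prompt helper; the phrasing mirrors the example in the text and is not KoNote's actual wording logic.

```python
def efficacy_prompt(target_behaviour: str) -> str:
    """Anchor the self-efficacy question to a specific target behaviour,
    per Bandura's guidance that efficacy is domain-specific. Uses
    "how sure" rather than "how confident" as the softer phrasing."""
    return f"How sure do you feel about being able to {target_behaviour}?"

print(efficacy_prompt("cook a healthy meal"))
# How sure do you feel about being able to cook a healthy meal?
```

The point is that the question is generated from the target, so a generic "are you confident?" can never appear: every prompt inherits the specificity of the goal it belongs to.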

Three Dimensions, Not One

Factor analyses across multiple validated frameworks consistently identify three distinct dimensions of outcome: what a person is doing (functional status), what they believe they can do (self-efficacy), and how they feel about their situation (satisfaction). These are moderately correlated but can diverge in clinically meaningful ways.

Each divergence pattern tells the coach something different. High progress but low self-efficacy means "I'm doing it but I don't trust myself yet" — fragile change that needs encouragement. High progress but low satisfaction means the goal itself may be misaligned. Measuring all three catches patterns that a single metric would miss.

Key sources: Perera, H.N. et al. (2018). Resolving dimensionality problems with WHOQOL-BREF. Assessment, 25(8), 1014–1025. Scholz, U. et al. (2002). Is general self-efficacy a universal construct? European Journal of Psychological Assessment, 18(3), 242–251.

Individualised Measurement

Goal Attainment Scaling, developed by Kiresuk and Sherman in 1968, is the gold standard for individualised outcome measurement in social and health services. Each participant's goals are scaled on a continuum where the "expected" level is defined collaboratively — in concrete, observable terms — before the work begins.

KoNote's AI-generated target-specific metrics draw on this approach. Each level describes an observable state defined at goal creation — not generic levels applied the same way to every participant. Coaches can accept, edit, or decline these AI-generated scales, keeping the human in the loop.

Key source: Kiresuk, T.J. & Sherman, R.E. (1968). Goal attainment scaling: A general method for evaluating comprehensive community mental health programs. Community Mental Health Journal, 4, 443–453.
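
To make the structure concrete, here is a minimal sketch of a GAS scale and the Kiresuk–Sherman summary T-score. The goal wording is invented, and whether KoNote computes T-scores at all is an assumption; only the five-level structure and the T-score formula come from the GAS literature.

```python
import math

# Sketch of a Goal Attainment Scaling record (Kiresuk & Sherman, 1968):
# five observable levels defined collaboratively before the work begins.
# The goal wording below is an invented example, not KoNote output.
scale = {
    -2: "Avoids speaking English outside class",        # much less than expected
    -1: "Speaks English only when the coach is present",
     0: "Orders food in English at familiar places",    # expected outcome
    +1: "Starts short conversations with neighbours",
    +2: "Handles a phone call in English unassisted",
}

def gas_t_score(levels, weights=None, rho=0.3):
    """Kiresuk & Sherman's summary T-score across a participant's goals.

    Each attained level is in -2..+2. rho = 0.3 is the conventional
    assumed inter-goal correlation; goals are equally weighted by default.
    """
    w = weights or [1.0] * len(levels)
    numerator = 10 * sum(wi * xi for wi, xi in zip(w, levels))
    denominator = math.sqrt((1 - rho) * sum(wi ** 2 for wi in w)
                            + rho * sum(w) ** 2)
    return 50 + numerator / denominator

# Attaining every goal at its expected level scores exactly 50.
print(gas_t_score([0, 0, 0]))  # 50.0
```

Because the "expected" level is calibrated per participant, a score of 50 means the same thing for everyone: outcomes stay individualised, yet remain comparable in aggregate.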

From Research to Metrics

Each research finding informed specific design decisions in KoNote's outcome measurement system:

Research Finding → KoNote Design Response

  • Specific goals produce 250% better outcomes (Locke & Latham) → AI goal builder validates targets against 8 research-grounded criteria
  • Self-efficacy must be domain-specific (Bandura) → Self-efficacy prompt asks about the specific target behaviour, not generic confidence
  • Self-report is gold standard for subjective outcomes (PROMIS) → All three metrics are participant self-report with behaviourally anchored levels
  • Three distinct dimensions emerge in factor analyses → Three universal metrics per target: Goal Progress, Self-Efficacy, Satisfaction
  • "How sure do you feel" is less loaded than "how confident" → Self-efficacy uses softer phrasing that works in trauma-informed practice
  • Independence is a culturally specific value → Goal Progress level 5 says "part of my life" not "independently"

What We're Not Claiming

Important distinction

The research cited above studied clinical practices — not this software. KoNote is designed to support feedback-informed practice, but we haven't conducted randomised trials of KoNote itself.

To be clear about what we're saying:

  • We are saying: Research shows that feedback-informed practice and collaborative documentation improve client outcomes. KoNote is designed to make those practices easier to implement.
  • We are not saying: Using KoNote will automatically improve your outcomes. That depends on how you use it, how you implement it, and whether it fits your context.

KoNote is new software. We'd welcome partnerships with researchers interested in evaluating its effectiveness in real-world settings.

Implementation Matters

The research is also clear about what goes wrong. Feedback tools fail when:

  • Staff perceive them as surveillance rather than support
  • The feedback goes into a database but doesn't inform the next session
  • Filling in forms disrupts the human moment in a conversation
  • Tools are too long or complex for routine use

KoNote can't solve these problems by itself. Software is a tool, not a solution. Whether feedback-informed practice works at your agency depends on:

  • Leadership commitment to using feedback, not just collecting it
  • Staff training on how to invite honest feedback from participants
  • A culture where participant input genuinely shapes service delivery
  • Realistic expectations about what data can and can't tell you

If you're looking for software that will magically improve your outcomes without changing anything else, KoNote isn't that. If you're looking for a tool that supports the practices research shows actually work, we've tried to build that.

Questions?

Read more about KoNote's features, or get in touch if you'd like to discuss how it might fit your agency's approach.