The evidence gap

Ng et al.'s framework, adapted to the Australian situation

In April 2026, Ng and colleagues published a flyway-wide audit of conservation actions in the Journal of Applied Ecology. They asked a question the field had been avoiding: of the things conservation managers actually do, how many are backed by published evidence that they work? The answer is uncomfortable. Education runs ahead of its evidence base. So do most of the rest. Australia was represented in their sample by a single reserve.

FSB adapts their framework to the Australian situation — coastal, non-breeding, no hunting, no farmland adjacency — and uses it as the spine of what Stewards record.

Ng, S. O., Sung, Y. H., Yu, Y. T., Tsang, T. P. N. & Lee, R. H. (2026). A call for evidence-based conservation: Securing the future of waterbirds along the East Asian–Australasian flyway. Journal of Applied Ecology, 63, e70389. https://doi.org/10.1111/1365-2664.70389 · Open Access.

Education runs ahead of the evidence

96% of EAAF reserves use education as their primary response to human disturbance — but its evidence sits in "likely beneficial", not "beneficial"

Ng and colleagues surveyed 25 reserves across the East Asian–Australasian Flyway, asking each what conservation actions they deploy against five categories of threat. They then compared those actions to the Conservation Evidence database, which collates peer-reviewed and grey-literature evidence for what works.

Education was the most-deployed action in the entire dataset — used by 24 of 25 reserves to address human disturbance. The evidence behind it, however, sits in the "likely beneficial" category, not the firmer "beneficial" one. The authors are explicit: "Although conservation actions taken by different reserves did not have any negative effects, they were only classified as 'likely to be beneficial' (not 'beneficial') in CE."

The pattern repeats across the panel of actions Ng et al. report for human disturbance. The most-deployed action sits on the weakest evidentiary ground. Several common actions — bird hides among them — are not in the Conservation Evidence database at all. That is the gap, and it carries through to the Australian flyway terminus.

Human disturbance: what's done vs. what's evidenced

The diagram below adapts panel (b) of Ng et al.'s Figure 3. Two actions from their original — "Bird-friendly agricultural practices" and the entirety of the hunting/by-catch panel — don't apply to the Moreton Bay context and have been dropped. The remaining seven actions are all deployed at Australian coastal sites in some form. The percentages stay attributed to Ng et al.'s EAAF survey (n=25 reserves). Connecting lines show which actions are supported by which evidence category in the Conservation Evidence database.

[Diagram: Conservation actions for human disturbance vs evidence base, adapted from Ng et al. (2026), Figure 3(b). Seven conservation actions on the left (deployment % of EAAF reserves, Ng et al. survey, n=25) map to four evidence categories on the right (strength in the CE database):]

  • Trail design (52%), Establish restricted area (36%), Signs for restricted area (12%) → Restricted access (Beneficial: firm evidence)
  • Education (96%), Cooperate with locals (28%) → Public campaign / education program (Likely beneficial: partial evidence)
  • Provide bird hide (52%), Control visitor numbers (12%) → Not mentioned in CE (no evidence base in the database)

A fourth evidence category, Protect habitat with law (Beneficial: firm evidence), has no mapped action in this adapted panel. Two non-Australian actions (bird-friendly agriculture, hunting) were dropped.

Diagram adapted from Ng et al. (2026), Journal of Applied Ecology 63, e70389, Figure 3(b), under the article's Creative Commons Open Access licence.

Reading the diagram

Beneficial: Published evidence supports the action's effect on waterbird outcomes.
Likely beneficial: Evidence exists but is partial. The action may work; the case isn't closed.
Not in CE database: The action is deployed widely but the evidence base has not been collated or assessed.
Deployment bar: The shaded portion of each action box shows the % of EAAF reserves using it (Ng et al. survey, n=25).

The Australian sample in Ng et al. is n=1. Their survey reached 25 reserves across 15 EAAF countries; Australia is represented by a single response. The authors flag this as a limitation. The flyway terminus — where most non-breeding shorebirds spend most of their year — is the most under-represented country in the dataset that documents the global evidence gap.

What the gap looks like at the Australian flyway terminus

Ng et al. don't list these questions explicitly, but their discussion implies each one. They translate to the Australian context directly. Each is answerable, in principle, by site-scale records collected over time. None of the answers currently exist in any comprehensive form for Moreton Bay.

  1. Does education actually shift visitor behaviour at the waterline?

    Education is the dominant intervention. At Moreton Bay it shows up as visitor centres, interpretive signage, school programs, EEC partnerships, and Steward presence. But the evidence base for "public campaign / education program" is partial, and almost none of it comes from Australian sites. Whether a person reads a sign, hears a talk, or meets a Steward and then changes how they approach the waterline is the empirical question.

    From Ng et al.: "the actions were only classified as 'likely to be beneficial' (not 'beneficial') in CE."
  2. Are passive measures — trail design, signs, restricted areas — sufficient as visitor pressure rises?

    These three actions sit in the "Beneficial" evidence category and account for most of the firmer-evidenced human-disturbance response. They work where visitor numbers are stable. Moreton Bay's coastal population grew through the 2010s and 2020s, and is projected to keep growing. Whether a system designed for an earlier intensity of visitation still holds at current densities is testable, not assumed.

    From Ng et al.: "With increased human density and conflict with waterbirds along the EAAF, relying solely on passive measures may not be sufficient, especially given that their benefits are only considered likely or non-existent."
  3. Are bird hides actually neutral, or are they themselves disturbance vectors?

    Bird hides are deployed at over half of EAAF reserves. They're a feature of every major roost site around Moreton Bay: Kakadu Beach, Toorbul, Buckley's Hole, Manly, Lytton, Boondall. None of the evidence base for hides sits in CE. Ng cites Ma et al. (2025) showing that visitor behaviour inside hides — talking, opening windows, scopes pointed through gaps — can itself disturb birds. The hide may be the disturbance.

    From Ng et al.: "a recent study has shown that different users' behaviour in bird hides can cause varying levels of disturbance to waterbirds… The very act of installing more bird hides to accommodate visitors can, paradoxically, attract even larger crowds."
  4. What about constructed roosts — Kakadu Beach in particular?

    Engineered roosts above HAT (highest astronomical tide) are an Australian specialty. Kakadu Beach was constructed in 1995; Empire Point High Tide Roost followed. These sites carry peak counts of 2,500 birds in season. They are major infrastructure-scale interventions, and they don't appear in CE's bird-conservation taxonomy at all. Whether they sustain the species they were built for, at what cost in ongoing maintenance, and under what site conditions they retain function are questions the evidence base is silent on.

    Implied by Ng et al.'s analysis — the action type is not in the CE database in any form.
  5. Where is regionally-relevant evidence missing, and what kind of records would close that gap?

    Ng et al.'s final answer is the field's first task: build the regionally relevant evidence. Their three recommendations close the paper: prioritise local research, enhance researcher–practitioner communication, and expand the evidence database with relevant spatial and taxonomic representation. The Australian sample gap (n=1) marks where the need is greatest. The kind of records that close the gap are exactly the kind a Steward at a Moreton Bay site is positioned to collect.

    From Ng et al.'s synthesis: "(1) prioritize regionally relevant research validating frontline conservation actions, (2) enhance communication between researchers and practitioners and (3) expand evidence databases with relevant spatial and taxonomic representation."

Stewards as evidence-makers

A Steward standing at the waterline of a Moreton Bay roost is positioned to do something that the Conservation Evidence database, as currently constituted, cannot: record what actually happens, at the species level, at a known site, across seasons and years. Not anecdote — record. Pseudonymous, time-stamped, geo-located, traceable, comparable across visits.

The three activities that train and structure that record — ShorelineWatch, FlagWatch and DuskWatch — are not informal observation. They are designed to feed the evidence base Ng et al. say is missing. Each captures different facets of what the diagram above leaves unanswered.

This isn't an alternative to the formal research the paper calls for. It runs alongside it. Stewards generate the practitioner-side record; researchers do the synthesis; the gap closes from both ends. Ng et al. name the gap. FSB is part of the closing.

What the data needs to capture

The three Steward activities map to three data streams. Together they cover the dimensions the questions above require: visitor behaviour at the waterline (SW), individual-bird biographies across visits (FW), and cohort-validated observation of a specific high-information event (DW). Each card below lists the minimum fields that turn a Steward's observation into something that can answer one of the open questions.

Public-tier solo activity

ShorelineWatch

Every observer can do it. Captures the waterline encounter — birds, site, and any disturbance events that happen during the session.

  • Pseudonym (deterministic, no PII)
  • Date · time · tide state
  • Site identifier (FSB ID, e.g. KKBC, TOOR)
  • Rough bird count, by species or group
  • Disturbance event type (dog, human, drone, vehicle, boat, hide-visitor activity)
  • Estimated distance from birds at disturbance trigger
  • Bird response (no response · alert · partial flush · full flush · displaced)
  • Recovery time (minutes to settle, or didn't return)
  • Site condition note (substrate, mangrove, freshwater fringe, signage present)
Closes evidence gap on: whether education and signage actually reduce disturbance events. Real visitor-behaviour baseline at sites with and without specific interventions.
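The minimum fields above can be read as a record type. A minimal sketch in Python, with hypothetical field names, enum values, and example data; FSB's actual schema isn't specified here:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Dict, Optional

class BirdResponse(Enum):
    NO_RESPONSE = "no response"
    ALERT = "alert"
    PARTIAL_FLUSH = "partial flush"
    FULL_FLUSH = "full flush"
    DISPLACED = "displaced"

@dataclass
class ShorelineWatchRecord:
    pseudonym: str                         # deterministic, no PII
    timestamp: str                         # ISO 8601 date-time of the session
    tide_state: str                        # e.g. "rising", "high", "falling"
    site_id: str                           # FSB site identifier, e.g. "KKBC"
    counts: Dict[str, int] = field(default_factory=dict)  # species/group -> rough count
    disturbance_type: Optional[str] = None     # dog, human, drone, vehicle, boat, ...
    distance_m: Optional[float] = None         # estimated distance at disturbance trigger
    response: Optional[BirdResponse] = None
    recovery_min: Optional[float] = None       # minutes to settle; None if birds didn't return
    site_note: str = ""                        # substrate, mangrove, signage present, ...

# A session with one dog-walker disturbance at a Kakadu Beach visit:
rec = ShorelineWatchRecord(
    pseudonym="FSB-3F9A21BC",
    timestamp="2026-04-12T07:40:00+10:00",
    tide_state="high",
    site_id="KKBC",
    counts={"Bar-tailed Godwit": 400, "Eastern Curlew": 30},
    disturbance_type="dog",
    distance_m=60.0,
    response=BirdResponse.PARTIAL_FLUSH,
    recovery_min=12.0,
)
```

Keeping the disturbance fields optional matters: a session with no disturbance event is still a valid record, and the absence is itself baseline data.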
Trained Steward solo activity

FlagWatch

Identifies and records individual flagged birds. Records sync to AWSG; the bird's life story comes back to the Steward by email.

  • Steward pseudonym
  • Date · time · site
  • Flag colour and engraving code
  • Species (inferred from flag colour scheme)
  • Body condition score (thin · fair · good · pre-departure fat)
  • Behaviour at observation (resting · feeding · preening · alert · displaced)
  • Distance from nearest disturbance source
  • Repeat-sighting comparison to previous record of this individual
  • Free notes
Closes evidence gap on: whether bird outcomes — body condition, site fidelity, return rates — track with the interventions in place at the sites the bird uses. The biography is the evidence.
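The pseudonym field is specified as deterministic with no PII. One standard way to get both properties at once is a keyed hash; a minimal sketch, assuming a project-held secret salt and an HMAC-SHA-256 derivation (FSB's actual scheme may differ):

```python
import hashlib
import hmac

def pseudonym(observer_id: str, project_salt: bytes) -> str:
    """Derive a stable, non-reversible pseudonym from an observer identifier.

    Deterministic: the same observer_id and salt always yield the same
    pseudonym, so one Steward's records link up across visits without
    any PII being stored in the record itself.
    """
    digest = hmac.new(project_salt, observer_id.encode("utf-8"),
                      hashlib.sha256).hexdigest()
    return "FSB-" + digest[:8].upper()
```

The same input always maps to the same code, distinct observers get distinct codes in practice, and without the salt the mapping can't be reversed or recomputed from the published records.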
Steward cohort activity

DuskWatch

A cohort observation of Terek Sandpiper roosts as the birds come in at dusk. Multiple eyes, cross-checked. Citizen Science in action.

  • Cohort group, leader Steward
  • Site (Terek roost identifier)
  • Date · sunset time · in-coming start · full-settlement time
  • Tide state · weather note
  • Count of birds settling
  • Disturbance events during settlement window
  • Cohort observations (timing variation, settle behaviour, late arrivals)
  • Photo or audio note if useful
Closes evidence gap on: site fidelity and timing variation at a specific, defensible roost event. Cohort observation also generates Steward training quality data.

Each of these streams populates a sheet. The sheets join on site identifier and date. Together they produce something the Conservation Evidence database currently lacks for the EAAF terminus: a longitudinal, regionally-relevant, site-scale record of what's deployed, what's happening to birds, and how the two relate. That is what Ng et al. say is missing. That is what Stewards build.
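The join described above can be sketched in a few lines. A minimal stdlib sketch, with hypothetical rows and field names, grouping records from any number of sheets on the (site identifier, date) key:

```python
from collections import defaultdict

# Hypothetical rows from two of the three sheets, keyed the way the
# text describes: site identifier plus date.
shorelinewatch = [
    {"site": "KKBC", "date": "2026-04-12", "disturbances": 3,
     "response": "partial flush"},
]
flagwatch = [
    {"site": "KKBC", "date": "2026-04-12", "flag": "ELA",
     "condition": "good"},
]

def join_on_site_and_date(*sheets):
    """Group records from any number of sheets by (site, date)."""
    joined = defaultdict(list)
    for sheet in sheets:
        for row in sheet:
            joined[(row["site"], row["date"])].append(row)
    return dict(joined)

sessions = join_on_site_and_date(shorelinewatch, flagwatch)
# sessions[("KKBC", "2026-04-12")] now holds both the disturbance
# record and the flag resighting from the same site-visit.
```

That co-location is the point: a disturbance count and a body-condition score from the same site on the same day can be read against each other, which is the relation the evidence base currently lacks.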

Source paper, in full

Cite as

Ng, S. O., Sung, Y. H., Yu, Y. T., Tsang, T. P. N. & Lee, R. H. (2026). A call for evidence-based conservation: Securing the future of waterbirds along the East Asian–Australasian flyway. Journal of Applied Ecology, 63, e70389. https://doi.org/10.1111/1365-2664.70389

Published Open Access under a Creative Commons licence by the British Ecological Society and John Wiley & Sons. The diagram on this page is adapted from Figure 3(b) of the source paper. The framework, the survey data, and the analytical approach are theirs. The Australian application, the questions phrased in this register, and the FSB tool-mapping are ours.

For the underlying data, see the figshare deposit: Ng, S. O. & Lee, R. H. (2026). Data from: A call for evidence-based conservation. figshare. https://doi.org/10.6084/m9.figshare.31978047