
This vignette walks through the crossing interpretation pipeline using the Morice watershed group (MORR) as an example. The same steps apply to any watershed group in the province.

Data generated by data-raw/vignette_morr.R. Sources: crossings from the fresh package’s provincial CSV, PSCIS assessments from the BC Data Catalogue via bcdata, override CSVs from bcfishpass. No database required.

fresh is a generic stream network engine. It segments streams at break points, classifies habitat by species thresholds, and rolls up results. But fresh doesn’t know about crossings, severity, or field assessments — that’s link’s job.

link (interpret, score crossings) --> break source spec --> fresh (segment, classify habitat)

link reads raw crossing data, applies field corrections, scores severity, and produces a break_sources spec that plugs directly into frs_habitat(). Confirmed barriers break the network geometry. Unassessed crossings are features registered on the network for downstream indexing (see fresh #92, #93).

The problem with binary classification

Provincial crossing assessments produce BARRIER or PASSABLE. This tells you nothing about how bad a barrier is or what habitat it blocks.
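
As a toy illustration (synthetic rows, not MORR data), two crossings can share the same binary status while differing sharply in the field measurements that matter:

```r
# Toy rows (synthetic): identical binary status, very different field reality.
xings <- data.frame(
  stream_crossing_id = c(101, 102),
  barrier_status     = c("BARRIER", "BARRIER"),
  outlet_drop        = c(1.60, 0.35)  # metres
)

# The binary label puts both crossings in one bucket...
table(xings$barrier_status)

# ...even though one outlet drop is more than four times the other.
xings$outlet_drop[1] / xings$outlet_drop[2]
```
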

morr_raw <- readRDS(system.file(
  "testdata", "morr_crossings_raw.rds", package = "link"))

nrow(morr_raw)
#> [1] 1599
table(morr_raw$crossing_source)
#> 
#>               CABD MODELLED CROSSINGS              PSCIS 
#>                  3               1424                172
table(morr_raw$barrier_status)
#> 
#>             BARRIER  PASSABLE POTENTIAL   UNKNOWN 
#>         1        86       116      1383        13

MORR has 1599 crossings: 1424 modelled, 172 with PSCIS field assessments, and 3 from CABD. Most modelled crossings have no field data.

Override corrections

bcfishpass maintains correction CSVs — tens of thousands of hand-reviewed crossings accumulated across field seasons and imagery review.

morr_fixes <- readRDS(system.file(
  "testdata", "morr_modelled_fixes.rds", package = "link"))

nrow(morr_fixes)
#> [1] 947
table(morr_fixes$structure)
#> 
#>      FORD NONE  OBS 
#>  718    4  162   63

For MORR: 947 modelled crossing fixes. Most reclassify crossings as empty (no structure found on imagery) or open-bottom structures (passable).
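
Conceptually, applying overrides is an id-keyed update: where an override row matches a crossing, its columns overwrite the crossing record. A minimal base-R sketch of that idea (toy data and a hypothetical `apply_overrides()` helper, not the package implementation):

```r
# Toy crossings and overrides keyed by modelled_crossing_id.
crossings <- data.frame(
  modelled_crossing_id = c(14001728, 14001721, 14001700),
  structure            = c(NA, NA, "CBS"),
  stringsAsFactors     = FALSE
)
overrides <- data.frame(
  modelled_crossing_id = c(14001728, 14001721),
  structure            = c("NONE", "OBS"),
  stringsAsFactors     = FALSE
)

# Hypothetical helper: overwrite 'structure' where an override row matches.
apply_overrides <- function(crossings, overrides, id = "modelled_crossing_id") {
  i   <- match(crossings[[id]], overrides[[id]])
  hit <- !is.na(i)
  crossings$structure[hit] <- overrides$structure[i[hit]]
  crossings
}

apply_overrides(crossings, overrides)
```

In the real pipeline this happens inside the database via `lnk_override_apply()`, which also auto-detects which columns to update.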

head(morr_fixes[, c("modelled_crossing_id", "structure",
                     "reviewer_name", "source")], 8)
#>     modelled_crossing_id structure reviewer_name                   source
#> 827             14001728      NONE            OB Bing/Google/ESRI imagery
#> 829             14001727      NONE            OB Bing/Google/ESRI imagery
#> 830             14001726      NONE            OB Bing/Google/ESRI imagery
#> 831             14001724      NONE            OB Bing/Google/ESRI imagery
#> 833             14001721       OBS            OB Bing/Google/ESRI imagery
#> 834             14001714      NONE            OB Bing/Google/ESRI imagery
#> 835             14001698      NONE            OB Bing/Google/ESRI imagery
#> 836             14001697      NONE            OB Bing/Google/ESRI imagery

The link override pipeline loads, validates, and applies these corrections:

conn <- lnk_db_conn()

# Load --- validates CSV structure before writing to DB
lnk_override_load(conn,
  csv  = "user_modelled_crossing_fixes_morr.csv",
  to   = "working.morr_modelled_fixes",
  cols_id = "modelled_crossing_id",
  cols_required = c("structure"),
  cols_provenance = c("reviewer_name", "review_date", "source"))

# Validate --- find orphans and duplicates
lnk_override_validate(conn,
  overrides = "working.morr_modelled_fixes",
  crossings = "working.morr_crossings")

# Apply --- auto-detects which columns to update
lnk_override_apply(conn,
  crossings = "working.morr_crossings",
  overrides = "working.morr_modelled_fixes")

PSCIS matching

PSCIS assessments have field measurements — outlet drop, culvert slope, length, channel width. The crossings table from fresh has network position. Matching links the measurements to the network so severity scoring uses real data.

morr_pscis <- readRDS(system.file(
  "testdata", "morr_pscis.rds", package = "link"))
morr_xref <- readRDS(system.file(
  "testdata", "morr_xref.rds", package = "link"))

nrow(morr_pscis)
#> [1] 169
nrow(morr_xref)
#> [1] 8

The 8 xref corrections are hand-curated GPS error fixes from field work. These take priority over spatial matching.
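
The matching precedence can be sketched as: use the curated xref pairing when one exists, otherwise fall back to the spatially nearest crossing. A toy base-R sketch (hypothetical data and `match_with_xref()` helper, not the package internals):

```r
# Toy inputs: spatial nearest-neighbour candidates plus curated xref pairs.
pscis <- data.frame(stream_crossing_id = c(1, 2, 3),
                    nearest_crossing   = c(10, 20, 30))  # spatial candidate
xref  <- data.frame(stream_crossing_id   = 2,
                    modelled_crossing_id = 99)           # hand-curated fix

# Hypothetical helper: an xref pairing wins; otherwise keep the spatial match.
match_with_xref <- function(pscis, xref) {
  i <- match(pscis$stream_crossing_id, xref$stream_crossing_id)
  ifelse(is.na(i), pscis$nearest_crossing, xref$modelled_crossing_id[i])
}

match_with_xref(pscis, xref)
#> [1] 10 99 30
```
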

lnk_match_pscis(conn,
  crossings = "working.morr_crossings",
  pscis     = "working.morr_pscis",
  xref_csv  = "pscis_modelledcrossings_streams_xref_morr.csv",
  to        = "working.morr_matched")

PSCIS barrier status overrides

morr_pscis_fixes <- readRDS(system.file(
  "testdata", "morr_pscis_status_fixes.rds", package = "link"))

if (nrow(morr_pscis_fixes) > 0) {
  morr_pscis_fixes[, c("stream_crossing_id",
                        "user_barrier_status", "notes")]
}
#>      stream_crossing_id user_barrier_status                  notes
#> 1295             197962            PASSABLE Bridge; Imagery Review

One MORR crossing has a barrier status override — confirmed PASSABLE by imagery review.

Severity scoring

Default thresholds classify crossings by biological impact:

Severity   Criteria                                        Interpretation
High       outlet_drop >= 0.6 m OR slope x length >= 120   Impassable at most flows
Moderate   outlet_drop >= 0.3 m OR slope x length >= 60    Flow-dependent
Low        everything else                                 Likely passable

lnk_score_severity(conn, "working.morr_crossings")

morr_dist <- readRDS(system.file(
  "testdata", "morr_severity_dist.rds", package = "link"))
morr_dist
#> 
#>     high      low moderate     <NA> 
#>       11       73       19       66

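The default thresholds amount to a small classifier. A sketch (hypothetical `classify_severity()` helper working on scalars; the package’s `lnk_score_severity()` operates on the database table, and the units here — drop and length in metres, slope in percent — are assumptions):

```r
# Classify one crossing from field measurements, per the default thresholds.
classify_severity <- function(outlet_drop, slope, length) {
  if (anyNA(c(outlet_drop, slope, length))) return(NA_character_)
  sl <- slope * length
  if (outlet_drop >= 0.6 || sl >= 120) return("high")
  if (outlet_drop >= 0.3 || sl >= 60)  return("moderate")
  "low"
}

classify_severity(1.60, 5.0, 20)  # -> "high"
classify_severity(0.35, 1.0, 15)  # -> "moderate"
classify_severity(0.10, 1.0, 10)  # -> "low"
```

Crossings with no field measurements score `NA`, which is why 66 of the 169 PSCIS records above are unscored.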
morr_scored <- readRDS(system.file(
  "testdata", "morr_pscis_scored.rds", package = "link"))

# The reveal: crossings with PSCIS measurements scored by severity
scored_with_data <- morr_scored[!is.na(morr_scored$severity), ]
head(scored_with_data[order(-scored_with_data$outlet_drop),
  c("stream_crossing_id", "barrier_result_code",
    "outlet_drop", "culvert_slope",
    "severity")], 10)
#>     stream_crossing_id barrier_result_code outlet_drop culvert_slope severity
#> 122             198039             BARRIER        1.60           5.0     high
#> 157             198085             BARRIER        1.60           5.0     high
#> 9               197367             BARRIER        1.57           5.0     high
#> 130             198056             BARRIER        1.20           4.0     high
#> 58              197951             BARRIER        1.10           4.5     high
#> 119             198036             BARRIER        1.10           6.0     high
#> 94              198009             BARRIER        0.80           5.0     high
#> 163             198936             BARRIER        0.75           2.0     high
#> 51              197944             BARRIER        0.70           4.5     high
#> 161             198934             BARRIER        0.70           4.0     high

The binary barrier_result_code treats these crossings identically. The severity column reveals the gradient — from impassable drops to likely passable culverts.

Handing off to fresh

lnk_break_source() produces the spec that plugs directly into fresh’s frs_habitat(). link’s severity labels translate to fresh’s access labels via label_map:

  • high severity --> "blocked" (confirmed barrier, breaks geometry)
  • moderate severity --> "potential" (unconfirmed, may break geometry)

morr_spec <- readRDS(system.file(
  "testdata", "morr_break_source.rds", package = "link"))
str(morr_spec)
#> List of 3
#>  $ table    : chr "working.morr_crossings"
#>  $ label_col: chr "severity"
#>  $ label_map: Named chr [1:2] "blocked" "potential"
#>   ..- attr(*, "names")= chr [1:2] "high" "moderate"
src <- lnk_break_source(conn, "working.morr_crossings")

# The handoff --- fresh takes it from here
fresh::frs_habitat(conn, "MORR", break_sources = list(src))

# After fresh classifies habitat, link can roll up per crossing
lnk_habitat_upstream(conn, "working.morr_crossings",
  "fresh.streams_habitat")

Scaling to any watershed group

The same steps work for any watershed_group_code. The override CSVs have watershed_group_code columns for filtering. The crossings CSV from fresh covers the entire province.

crossings <- read.csv(system.file("extdata", "crossings.csv",
                                   package = "fresh"))
wsgs <- unique(crossings$watershed_group_code)

for (wsg in wsgs) {
  # 1. Filter crossings to WSG
  # 2. Load and apply overrides (filtered to WSG)
  # 3. Fetch PSCIS from bcdata, match with lnk_match_pscis()
  # 4. lnk_score_severity()
  # 5. lnk_break_source() -> frs_habitat()
}
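
Under those assumptions, one iteration might look like the following sketch (the `working.` table names and lowercase-prefix convention are illustrative, not a package requirement; function signatures follow the examples above, and the xref/override loading steps are elided):

```r
# Illustrative per-watershed-group run; not runnable without a link database.
run_wsg <- function(conn, crossings, wsg) {
  tbl <- paste0("working.", tolower(wsg), "_crossings")

  # 1. Filter provincial crossings to this watershed group
  wsg_crossings <- crossings[crossings$watershed_group_code == wsg, ]

  # 2. Apply previously loaded overrides (filtered to this WSG)
  lnk_override_apply(conn,
    crossings = tbl,
    overrides = paste0("working.", tolower(wsg), "_modelled_fixes"))

  # 3. Match PSCIS assessments onto the network
  lnk_match_pscis(conn,
    crossings = tbl,
    pscis     = paste0("working.", tolower(wsg), "_pscis"),
    to        = paste0("working.", tolower(wsg), "_matched"))

  # 4. Score severity from field measurements
  lnk_score_severity(conn, tbl)

  # 5. Hand off to fresh
  src <- lnk_break_source(conn, tbl)
  fresh::frs_habitat(conn, wsg, break_sources = list(src))
}
```
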