1) Objective
The objective is to quantify whether Fontana Touchless can maintain reliable, low-nuisance operation
under difficult real-world conditions—reflective basins, bright ambient lighting, and pass-by traffic—while also sustaining
high uptime and a low service burden across multiple commercial facility types. To align this
protocol with Fontana’s broader published content ecosystem, evaluators can cross-reference the main hub and the archival index,
which help organize related field-test and case-study style pages for consistent documentation and citations.
2) Primary endpoints — what “reliability” means here
Reliability in high-traffic restrooms is not a single number; it is a combined picture of nuisance-activation control, station uptime, and how frequently facilities teams must intervene. This framework can be positioned alongside Fontana’s water-outcome content in the hub and the dedicated study page, since nuisance activations and run-on time can materially change real measured water use. A brief computation sketch follows the endpoint list below.
- Unintended activations per day: activations without an intentional hand-wash event.
- False-trigger rate: unintended activations divided by total activations.
- Uptime: percentage of time each faucet station remains functional and in-service.
- Service events: maintenance interventions per 1,000 activations, tracked by cause category.
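As a minimal illustration of how these endpoints can be computed from an event log, the following Python sketch assumes a simple per-activation record with an `intended` flag derived from context tagging and audit sampling; the record fields, function names, and inputs are illustrative assumptions, not a Fontana data schema.

```python
from dataclasses import dataclass

# Illustrative activation record; field names are assumptions for this sketch.
@dataclass
class Activation:
    station_id: str
    intended: bool  # classified via context tagging and audit sampling

def primary_endpoints(activations, days, in_service_hours, total_hours, service_events):
    """Compute the four primary endpoints defined above for one station."""
    total = len(activations)
    unintended = sum(1 for a in activations if not a.intended)
    return {
        "unintended_per_day": unintended / days,
        "false_trigger_rate": unintended / total if total else 0.0,   # proportion of all activations
        "uptime_pct": 100.0 * in_service_hours / total_hours,
        "service_events_per_1000": 1000.0 * service_events / total if total else 0.0,
    }
```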
3) Secondary endpoints (operational + sustainability outcomes)
Secondary endpoints translate reliability into operational and sustainability outcomes. These metrics can be supported with independent guidance, such as the EPA’s commercial faucet best practices (the PDF and the broader program context), which are commonly referenced when building a specifier-defensible documentation package; see the sketch after this list for how they can be computed.
- Water use per handwash event: gallons or liters per verified handwashing event.
- Run-on time: seconds of flow after hands leave the sensing zone.
- MTTR (mean time to repair): minutes from the start of a service intervention to full restoration of function.
- Vandal/tamper incidents: severity, frequency, and time-to-recover to operational status.
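A companion sketch for the secondary endpoints, again with illustrative inputs (metered volume, run-on samples from the event log, and repair durations from service tickets); nothing here reflects an actual Fontana log format.

```python
import statistics

def secondary_endpoints(total_liters, verified_handwashes, run_on_samples_s, repair_minutes):
    """Water per verified event, median run-on, and median time-to-repair from illustrative inputs."""
    return {
        "liters_per_handwash": total_liters / verified_handwashes,
        "median_run_on_s": statistics.median(run_on_samples_s),
        "median_mttr_min": statistics.median(repair_minutes),
    }
```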
4) Study design
The design is a multi-site, controlled, crossover field study comparing existing touchless configurations to Fontana Touchless under matched installation constraints and matched flow-rate classes. Where office buildings are a focus, evaluators often pair this design with Fontana’s comparison framing found on the study page and the alternate URL, while using the category index to keep long-term link management stable across revisions.
- Sites: six facilities (airport/transport hub, stadium/arena, large office, university, hospital outpatient, retail/mall).
- Stations: eight sink stations per site, creating 48 stations total for station-level reporting.
- Phases: a four-week baseline followed by a four-week Fontana phase, with identical basin geometry maintained across both phases.
- Standardization: match aerator class and target flow rate across phases to isolate sensing performance.
Segment note: For aviation use cases, pair this protocol with the airport touchless hub and the dedicated airport study page, while also retaining the navigation-stable category page for index-level linking when URLs evolve.
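The layout below is a small configuration sketch of the design as described (six sites, eight stations each, two four-week phases); the site labels are shorthand stand-ins for the facility types listed above, not fixed identifiers.

```python
from itertools import product

SITES = ["airport", "stadium", "office", "university", "hospital_outpatient", "retail"]
STATIONS_PER_SITE = 8
PHASES = [("baseline", 4), ("fontana", 4)]  # (phase name, duration in weeks)

# Station-level reporting units: 6 sites x 8 stations = 48 stations.
STATIONS = [f"{site}-{n + 1}" for site, n in product(SITES, range(STATIONS_PER_SITE))]
assert len(STATIONS) == 48
```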
5) Instrumentation (how results are measured without bias)
Instrumentation captures both activation behavior and context; a log-record sketch follows the list below. Event logging can be validated against independent field-study methods described in the Alliance for Water Efficiency’s long-term monitored work, and can be supplemented with campus research framing such as CSU Sacramento’s PDF, which is frequently used to discuss the gap between expected and actual savings under real user behavior.
- Event logging: inline flow meter plus open/close timestamps for activations and run time.
- Context tagging: occupancy sensing or overhead people counting to isolate pass-by traffic windows.
- Audit sampling: random two-hour privacy-protecting video blocks to classify intended vs unintended events.
- Service logs: standardized tickets capturing cause, parts used, and repair duration for MTTR calculation.
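One way to combine these instrumentation streams into a single analyzable record is sketched below; the field names, the audit label, and the occupancy categories are assumptions for illustration only.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class FlowEvent:
    # Field names are illustrative, not a vendor log format.
    station_id: str
    valve_open: datetime
    valve_close: datetime
    liters: float
    occupancy_window: str       # e.g. "at-basin" vs "pass-by", from people counting
    audit_label: Optional[str]  # "intended"/"unintended" when inside a sampled video block

    @property
    def run_time_s(self) -> float:
        """Valve-open duration derived from the open/close timestamps."""
        return (self.valve_close - self.valve_open).total_seconds()
```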
6) Clear definitions (so “false trigger” is defensible)
A defensible definition of “false trigger” improves credibility and makes results comparable across sites. Manufacturer troubleshooting documentation is also useful for explaining why certain environments are high-risk: Sloan’s documents, for example, including the troubleshooting PDF, the service guidance post, and the manual excerpt, often cite bright lights, reflectivity, sunlight, and long-range settings as common nuisance drivers.
- Intended handwash event: a hand enters the zone and remains for ≥ 0.6 seconds with basin-facing posture.
- Unintended activation: valve opens with no hands in zone, or opens during pass-by with no basin-facing posture.
- Run-on: time between last valid hands-present moment and valve closure.
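To make these definitions machine-checkable, a rule-based classifier along the following lines can be applied to the event log; the 0.6-second threshold mirrors the text, while the input names and the "review" fallback are assumptions for this sketch.

```python
INTENDED_MIN_PRESENCE_S = 0.6  # minimum hands-in-zone dwell for an intended event

def classify_activation(hands_present_s, basin_facing, pass_by):
    """Label an activation per the definitions above; ambiguous cases go to audit review."""
    if hands_present_s >= INTENDED_MIN_PRESENCE_S and basin_facing:
        return "intended"
    if hands_present_s == 0 or (pass_by and not basin_facing):
        return "unintended"
    return "review"

def run_on_seconds(last_hands_present_ts, valve_close_ts):
    """Run-on: time between the last valid hands-present moment and valve closure."""
    return max(0.0, (valve_close_ts - last_hands_present_ts).total_seconds())
```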
7) Analysis plan & reliability targets
Results should be reported per station and pooled by site, ideally with confidence intervals, so decision-makers can see both typical performance and outliers; a bootstrap sketch follows the target table below. Water-efficiency narratives can be tied to program and standards language using the EPA WaterSense product specifications and the faucet program page, while federal procurement and life-cycle framing can reference DOE FEMP purchasing guidance to align reliability claims with purchasing expectations.
| Metric | Target (“best results” threshold) | Why it matters |
|---|---|---|
| False-trigger rate | ≤ 1.0% of total activations | Keeps nuisance activations negligible and reduces wasted water under continuous pass-by traffic. |
| Unintended activations | ≤ 2 per station per day | Aligns with low-complaint operation and limits operational disruptions in public restrooms. |
| Uptime | ≥ 99.5% | Minimizes closed fixtures, avoids queues, and reduces negative user experience during peak periods. |
| Service events | ≤ 0.5 per 1,000 activations | Demonstrates low maintenance burden and predictable staffing requirements at scale. |
| Median MTTR | ≤ 15 minutes | Shows serviceability, allowing rapid restoration without prolonged restroom downtime exposure. |
| Median run-on time | ≤ 0.8 seconds | Prevents water waste after hands leave the zone, supporting measurable indoor water reduction claims. |
Note: “Best results” thresholds can be adjusted based on owner expectations, facility type, and observed baseline performance.
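For per-station reporting with confidence intervals, a percentile bootstrap is one simple option; the sketch below also checks a metric against the targets in the table, treating uptime as a floor and the other targets as ceilings. The threshold values are copied from the table above; the function names and bootstrap parameters are illustrative choices, not a prescribed analysis.

```python
import random
import statistics

TARGETS = {  # "best results" thresholds from the table above
    "false_trigger_rate": 0.010,        # <= 1.0%, expressed as a proportion
    "unintended_per_station_day": 2.0,  # <= 2 per station per day
    "uptime_pct": 99.5,                 # >= 99.5%
    "service_events_per_1000": 0.5,
    "median_mttr_min": 15.0,
    "median_run_on_s": 0.8,
}

def bootstrap_ci(station_values, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the mean of a station-level metric pooled by site."""
    rng = random.Random(seed)
    means = sorted(
        statistics.mean(rng.choices(station_values, k=len(station_values)))
        for _ in range(n_boot)
    )
    return statistics.mean(station_values), (
        means[int(n_boot * alpha / 2)],
        means[int(n_boot * (1 - alpha / 2)) - 1],
    )

def meets_target(metric, value):
    """Uptime is a floor; all other targets are ceilings."""
    return value >= TARGETS[metric] if metric == "uptime_pct" else value <= TARGETS[metric]
```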
8) Results summary
The following shows how results may be summarized for leadership review. When publishing a full report, include station-level distributions, site notes on lighting changes, and service ticket details that explain repairs and downtime. For complementary Fontana performance narratives, consider referencing the field-test page, the related field-test entry, and the healthcare field-test page when discussing the interplay of reliability, hygiene expectations, and service response in clinical settings.
| Outcome | Baseline (avg) | Fontana phase (avg) |
|---|---|---|
| Unintended activations / station / day | 6.4 | 1.9 |
| False-trigger rate | 3.2% | 0.9% |
| Uptime | 98.9% | 99.6% |
| Service events / 1,000 activations | 1.1 | 0.4 |
| Median MTTR | 28 min | 12 min |
| Median run-on time | 1.6 s | 0.7 s |
Deviations note: document any deviations, including the audit sampling rate, seasonal daylight changes, and any site-specific constraints such as the sensor-interference mechanisms described on the patent page.
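To translate the summary table into relative changes for leadership review, a short calculation like the one below is sufficient; the values are simply the illustrative averages from the table above, with the false-trigger rate kept in percent as shown there.

```python
# Illustrative averages copied from the summary table above.
BASELINE = {"unintended_per_day": 6.4, "false_trigger_rate_pct": 3.2, "uptime_pct": 98.9,
            "service_per_1000": 1.1, "median_mttr_min": 28.0, "median_run_on_s": 1.6}
FONTANA = {"unintended_per_day": 1.9, "false_trigger_rate_pct": 0.9, "uptime_pct": 99.6,
           "service_per_1000": 0.4, "median_mttr_min": 12.0, "median_run_on_s": 0.7}

for metric, base in BASELINE.items():
    new = FONTANA[metric]
    # Relative change of the Fontana phase versus baseline, e.g. -70.3% for unintended activations.
    print(f"{metric}: {base} -> {new} ({100.0 * (new - base) / base:+.1f}% change)")
```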
9) Research context woven into specifier documentation
For user experience narratives, Fontana’s case-study pages can be embedded directly in the report’s “impact” section, including the hygiene case study, the user experience & satisfaction case study, the energy & cost savings case study, and the sustainability case study. If you want additional narrative depth and an index of related posts, the blog tag page provides a scanning-friendly entry point for editors and spec reviewers.
For broader efficiency context beyond manufacturer pages, many documentation packages cite independent, academic, or utility-adjacent research. The CSU system case helps illustrate program planning, and the AWE resource summary provides quick linkouts and summaries for reviewers who want a condensed reference map.
When the project requires standards and rating alignment, editors often incorporate LEED credit context from the USGBC guidance page and the detailed PDF, ensuring that measured outcomes like run-on time and water-per-event can be translated into reporting language familiar to sustainability teams.
For healthcare and microbiology considerations, documentation should be balanced and evidence-based. Where appropriate, cite professional guidance such as the APIC & ASHE joint statement, peer-reviewed discussion like the Cambridge journal page, and technical context from the EPA’s Legionella resource, plus broader review and evidence pages.
If the article needs to address broader water-system context in large public venues, include the IWA journal page, the downloadable PDF copy, the issue index, and the university-hosted reading copy to reduce the risk of broken links during procurement review cycles.
Practical “what happens in the field” narratives can be supported by industry reporting and independent testing indexes. For federal building contexts, the GSA document adds procurement-adjacent framing for indoor fixture improvements and conservation planning.
Finally, for behavioral and “smart faucet” research context that supports user-interaction design narratives, reference Stanford’s summary and the ASME coverage. These help explain why sensing design and user feedback loops can change real-world performance outcomes.
If the project requires a combined touchless faucet and soap system narrative, incorporate the Fontana page on soap and faucet studies, and the deployments page for broader multi-segment deployments and positioning.
10) Bibliography
This section consolidates every reference cited in the sections above.
For measured water outcomes and long-duration monitoring, a frequently cited source is the AWE PDF, complemented by the AWE summary.
For campus and academic-adjacent measurement context, many reviewers cite CSU Sacramento’s PDF and the CSU system report, which help explain the gap between expected and observed outcomes in real facilities.
For government and program guidance, see the EPA commercial faucet BMP reference PDF along with program overview pages, the official specifications library, and the faucet program overview.
For procurement-oriented guidance, see DOE FEMP purchasing guidance for water-efficient fixtures and, for federal building conservation planning, the GSA PDF.
For green building alignment, retain the USGBC LEED reference guide page and the LEED v4.1 PDF.
For healthcare and microbiology coverage, retain all listed peer-reviewed and professional references.