Client Success Story

Reducing False Positives (~55%) for a Security Solutions Provider with a Computer Vision Platform

2x camera coverage per operator

~55% reduction in false positives

~70% lower bandwidth usage

Service

  • Data Annotation
  • Computer Vision

Platform

  • Computer Vision Models (YOLO)
  • AWS Cloud
THE CLIENT

A Global Security Solutions Provider

Our client is a Europe-based provider offering managed security and monitoring solutions for large commercial facilities. They operate a 24x7 operations center where more than 22,000 cameras across 180 client sites (including corporate campuses, logistics hubs, and supply chain facilities) are continuously monitored.

THEIR CHALLENGE

Enhance Tracking Accuracy and Scale Real-Time Security Monitoring Without a Proportional Headcount Increase

As the company scaled its portfolio by adding more sites and cameras, its existing security management model began to fall short because of:

  • High Manual Monitoring Overhead: Each operator had to monitor 25–30 camera feeds at once, which led to fatigue, missed incidents, and inconsistent response.
  • Network Constraints and Storage Limitations: Many sites had limited uplink bandwidth and legacy NVRs (Network Video Recorders); when bandwidth dropped, video quality was downgraded or streams buffered, reducing visual clarity and undermining analytics accuracy.
  • Difficulty Standardizing Analytics across Different Sites: Differences in camera vendors, resolutions, mounting heights, angles, and lighting meant that rules or models tuned for one facility often failed in another.
  • Increasing False Positives: Lighting changes, reflections, and routine staff movement were frequently flagged as “suspicious,” so operators spent nearly 60 percent of their time clearing false alerts instead of focusing on genuine security events.
THE REQUIREMENT

A Computer Vision-Based Security Solution

The client wanted us to build a custom CV solution that would:

  • Reduce dependence on manual monitoring
  • Standardize security analytics
  • Support a portfolio of more complex security use cases: perimeter breach, loitering, tailgating, PPE compliance, crowding at exits, and abandoned objects.
  • Run inference close to the camera (edge devices)
  • Improve alert precision and recall
OUR SOLUTION

A CV-Powered Security Analytics Platform for Real-Time, Cross-Site Monitoring

We designed a multi-tenant computer vision security platform to automate real-time monitoring, filter out false positives, and standardize analytics across heterogeneous sites. The platform runs inference at the edge to operate within limited bandwidth and storage, processing data locally to minimize latency.

Workflow of the solution

1

Discovery and Evaluation

We started with a 4-week discovery phase:

  • Worked closely with operations leads and three representative end customers to map high-priority security scenarios and current incident patterns.
  • Analyzed 90 days of historical incident logs to understand where manual monitoring failed.
  • Selected a representative subset of 400 cameras across 12 sites, covering indoor corridors, loading bays, parking lots, lobbies, and high-value zones, to serve as the initial training and validation cohort.
2

Data Preparation and Annotation

The client had petabytes of archived footage, but it was not accurately annotated for model training. We therefore compiled the data and set up a pipeline for the following:

  • Sampling videos for typical activity and edge cases (night shifts, rain, varying lighting).
  • Extracting both positive and negative examples so the models could learn subtle distinctions, such as a visitor escorted by staff versus a genuine tailgating event.
  • Annotating the curated dataset to label:
    • Persons, vehicles, and objects
    • Specific PPE items (helmets, high-visibility vests)
    • Zones of interest (restricted areas, exits, fence lines, parking slots)
    • Interaction labels such as “person entering secure zone,” “person without helmet in PPE zone,” “object left behind,” and “multiple persons through single access event”

All annotations went through a two-tier QA process to reach target agreement rates above 98 percent on key classes.
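
To make the labeling scheme concrete, a single annotated frame could be represented roughly as below. This is a minimal sketch only; the field names and values are illustrative assumptions, not the client's actual annotation schema.

```python
# Hypothetical example of one annotated frame in the labeling scheme described
# above. Field names, IDs, and coordinates are illustrative, not the real schema.
annotation_record = {
    "video_id": "site12_loading_bay_cam03",   # assumed naming convention
    "frame_index": 18450,
    "objects": [
        {"class": "person", "bbox_xyxy": [412, 180, 498, 420], "attributes": ["no_helmet"]},
        {"class": "vehicle", "bbox_xyxy": [60, 220, 340, 460], "attributes": []},
    ],
    "zones": [
        {"zone_id": "ppe_zone_1", "type": "ppe_required",
         "polygon": [[380, 120], [640, 120], [640, 480], [380, 480]]},
    ],
    "interactions": [
        {"label": "person_without_helmet_in_ppe_zone",
         "object_ids": [0], "zone_id": "ppe_zone_1"},
    ],
}
```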

3

CV Model and Use Case Design

Using the annotated datasets, we implemented a modular CV model stack with custom-trained models tailored to specific use cases.

  • Object and Person Detection: A customized object detection model (based on a modern YOLO variant).
  • Multi-Camera Tracking: A tracking layer based on a YOLOv8 detector combined with a DeepSORT-style multi-object tracker to follow individuals across multiple cameras within a site.
  • Zone and Behavior Analytics: Custom rules and micro-models on top of the YOLOv8 + DeepSORT stack to interpret behavior (see the sketch after this list):
    • Perimeter breaches when a person crosses a virtual fence
    • Tailgating when the count of people entering exceeds the authorized number
    • Loitering based on dwell time thresholds in sensitive zones
    • Abandoned objects when an object remains in place after its associated person has left the frame
  • Anomaly Detection for Unknown Patterns: For areas with complex traffic patterns, we added an unsupervised anomaly detection component that learned “normal” motion flows over time and flagged deviations.
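
As referenced above, the sketch below illustrates how the detection and zone-analytics layers fit together: it runs a pretrained YOLOv8 model on a single frame and flags any detected person whose foot point falls inside a virtual perimeter polygon. It is a minimal example built on the public ultralytics and shapely packages; the weights, zone coordinates, and stream URL are placeholders rather than the production configuration.

```python
# Minimal sketch: YOLOv8 person detection + virtual-perimeter breach check.
# Assumes the public `ultralytics` and `shapely` packages; weights, zone
# coordinates, and camera source are placeholders, not the production setup.
import cv2
from shapely.geometry import Point, Polygon
from ultralytics import YOLO

PERSON_CLASS_ID = 0  # "person" in the COCO classes used by pretrained YOLOv8
PERIMETER = Polygon([(100, 300), (600, 300), (600, 700), (100, 700)])  # example zone

model = YOLO("yolov8n.pt")  # placeholder; a custom-trained variant in practice

def perimeter_breaches(frame):
    """Return bottom-center points of persons detected inside the perimeter zone."""
    breaches = []
    results = model(frame, verbose=False)[0]
    for box in results.boxes:
        if int(box.cls[0]) != PERSON_CLASS_ID:
            continue
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        foot_point = Point((x1 + x2) / 2.0, y2)  # approximate ground contact point
        if PERIMETER.contains(foot_point):
            breaches.append((foot_point.x, foot_point.y))
    return breaches

cap = cv2.VideoCapture("rtsp://example-camera/stream")  # placeholder stream URL
ok, frame = cap.read()
if ok:
    for x, y in perimeter_breaches(frame):
        print(f"Perimeter breach candidate at ({x:.0f}, {y:.0f})")
cap.release()
```

The same pattern extends to the other rules: tailgating compares the count of tracked IDs passing through an access zone against the authorized number, and loitering applies dwell-time thresholds to tracked IDs inside sensitive zones.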
4

Edge Deployment and System Architecture Design

Given the client’s scale and latency requirements, we designed a hybrid edge-cloud architecture:

  • Edge Inference Nodes: We deployed containerized inference services on NVIDIA Jetson-class devices installed in each site’s network to ingest streams from local cameras. Running inference at the edge ensured that only low-bitrate structured events, rather than raw video, were passed on to the cloud (a minimal publishing sketch follows this list).
  • Cloud Orchestration: We hosted the central security analytics platform on AWS, with:
    • Containerized services on Amazon EKS
    • Alert and configuration data stored in Amazon RDS/DynamoDB
    • Event media archived in Amazon S3.

These services were exposed through secure APIs and web dashboards.

  • Resilience and Bandwidth Control: If cloud connectivity dropped, edge nodes continued to run detection and buffered critical events locally. We also configured adaptive streaming and event batching to keep bandwidth consumption within the desired limits.
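
As a rough illustration of the edge-to-cloud handoff, the sketch below forwards one structured event upstream: a compressed snapshot goes to Amazon S3 and a small metadata record goes to DynamoDB, with local buffering if connectivity drops. The bucket, table, path, and field names are assumptions for illustration, not the client's actual configuration.

```python
# Sketch of an edge node forwarding a structured event (not raw video) to the
# cloud: a JPEG snapshot to S3 and a small metadata record to DynamoDB.
# Bucket, table, buffer path, and field names are illustrative assumptions.
import json, time, uuid
import boto3
import cv2
from botocore.exceptions import BotoCoreError, ClientError

s3 = boto3.client("s3")
events_table = boto3.resource("dynamodb").Table("security-events")  # assumed table name
LOCAL_BUFFER = "/var/lib/edge-agent/event_buffer.jsonl"             # assumed buffer path

def publish_event(frame, site_id, camera_id, event_type, risk_score):
    """Send one structured detection event upstream; buffer locally on failure."""
    event_id = str(uuid.uuid4())
    ok, jpeg = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 70])
    record = {
        "event_id": event_id,
        "site_id": site_id,
        "camera_id": camera_id,
        "event_type": event_type,          # e.g. "perimeter_breach", "tailgating"
        "risk_score": int(risk_score),
        "timestamp": int(time.time()),
        "snapshot_key": f"{site_id}/{camera_id}/{event_id}.jpg",
    }
    try:
        if ok:
            s3.put_object(Bucket="security-event-media",            # assumed bucket
                          Key=record["snapshot_key"], Body=jpeg.tobytes())
        events_table.put_item(Item=record)
    except (BotoCoreError, ClientError):
        # Connectivity lost: keep the event locally and retry later.
        with open(LOCAL_BUFFER, "a") as f:
            f.write(json.dumps(record) + "\n")
```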
5

Workflows and SOC (Security Operations Center) Experience Optimization

The goal was not only better detection but also better decision-making by human operators. To support this, we did the following:

  • Each event (for example, “PPE violation in Bay 4”) was assigned a risk score based on time, zone criticality, and event type so operators could prioritize accordingly (a simplified scoring sketch follows this list).
  • We implemented custom logic to merge related alerts, such as multiple detections of the same person loitering across adjacent cameras.
  • We configured the platform to generate periodic reports for end customers using Amazon QuickSight and scheduled exports (PDF/CSV) from Amazon S3/RDS.
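
A simplified version of the risk-scoring logic from the first point might look like the following; the event weights, zone tiers, and off-hours boost are illustrative values, not the tuned production parameters.

```python
# Simplified illustration of event risk scoring based on event type, zone
# criticality, and time of day. Weights and tiers are illustrative only.
from datetime import datetime

EVENT_WEIGHTS = {"perimeter_breach": 5, "tailgating": 4, "loitering": 3,
                 "abandoned_object": 3, "ppe_violation": 2}
ZONE_WEIGHTS = {"high_value": 3, "restricted": 2, "general": 1}

def risk_score(event_type: str, zone_tier: str, timestamp: datetime) -> int:
    """Return a coarse 1-10 priority score for an alert."""
    score = EVENT_WEIGHTS.get(event_type, 1) + ZONE_WEIGHTS.get(zone_tier, 1)
    if timestamp.hour < 6 or timestamp.hour >= 22:   # off-hours events rank higher
        score += 2
    return min(score, 10)

# Example: a night-time perimeter breach in a high-value zone scores 10.
print(risk_score("perimeter_breach", "high_value", datetime(2024, 5, 1, 2, 30)))
```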
6

Continuous Learning and Governance

To ensure reliability at scale, we implemented SageMaker-based MLOps pipelines with live performance monitoring through CloudWatch and Prometheus/Grafana. Missed incidents and false positives were fed back for re-annotation and periodic retraining, with updated models rolled out gradually. We also built industry-specific configuration templates so new sites could be onboarded quickly with baseline analytics tailored to their environment.
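
For illustration, live performance monitoring of the kind described above could publish per-site alert-quality counters as custom CloudWatch metrics, which dashboards and alarms can then track for drift. The namespace, metric names, and dimension below are assumptions, not the actual configuration.

```python
# Sketch: publishing model-performance metrics to CloudWatch so that drift
# (e.g., rising false-positive rates) can be alarmed on. Namespace, metric
# names, and dimensions are assumptions for illustration.
import boto3

cloudwatch = boto3.client("cloudwatch")

def report_alert_quality(site_id: str, true_positives: int, false_positives: int):
    """Push alert-quality counters for one site and reporting window."""
    total = true_positives + false_positives
    precision = true_positives / total if total else 0.0
    cloudwatch.put_metric_data(
        Namespace="SecurityCV/AlertQuality",                    # assumed namespace
        MetricData=[
            {"MetricName": "FalsePositives", "Value": false_positives, "Unit": "Count",
             "Dimensions": [{"Name": "SiteId", "Value": site_id}]},
            {"MetricName": "AlertPrecision", "Value": precision, "Unit": "None",
             "Dimensions": [{"Name": "SiteId", "Value": site_id}]},
        ],
    )

report_alert_quality("site-12", true_positives=84, false_positives=9)
```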

Technology Stack

Modeling & Computer Vision
  • Custom YOLO detector
  • YOLOv8 + DeepSORT
Edge Inference
  • NVIDIA Jetson
Cloud Infrastructure (AWS)
  • Amazon EKS
  • Amazon RDS/DynamoDB
  • Amazon S3
Reporting & Analytics
  • Amazon QuickSight
MLOps & Monitoring
  • Amazon SageMaker
  • Amazon CloudWatch
  • Prometheus/Grafana
THE RESULT

Project Outcomes

2x camera coverage per operator without added headcount

~55% reduction in false positive alerts

~65% faster analytics setup for new facilities and camera layouts

~70% lower upstream bandwidth usage

CONTACT US

Looking to Build a Custom CV Solution for Your Business?

We build production-grade CV platforms that enhance detection accuracy, reduce manual workloads, and standardize analytics across multi-site environments. Contact us to learn more about our computer vision services.