Computer Vision on RealWear

Demand for computer vision technology has been steadily growing over the last decade: there are currently over 85,000 LinkedIn job listings related to computer vision in the USA.


Part of the growth of this technology has been its usefulness for companies pursuing autonomous driving: think Tesla’s Autopilot, the Ford-backed startup Argo AI, or Alphabet’s self-driving subsidiary Waymo.

But computer vision has been adopted more widely too, with manufacturers rolling out computer vision-based solutions for use cases such as defect detection, visual inspection, parts counting and picking. This post is an introduction to computer vision in manufacturing contexts and a brief exploration of use cases that can be enabled through wearable RealWear devices.

What is computer vision?

Computer vision is a branch of AI focused on interpreting images or video, typically to identify objects and their relative positions at least as accurately as a human can.

While general-purpose models do exist, computer vision models typically need to be trained per application: a computer will not automatically recognize objects; it needs to be trained on what to look for and told what it is seeing.

Computer vision models typically learn through pattern recognition. As an example, feeding a computer vision algorithm thousands of images of a lawn mower engine with a missing fuel cap will allow it to identify the patterns in which pixels typically appear when that type of mower engine is missing a fuel cap. Replace the fuel cap and the pixels won’t follow the same patterns: the model will output that it’s not looking at a mower with a missing fuel cap.
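The pattern-recognition idea can be illustrated with a deliberately tiny sketch: average the training images for each class into a per-class "prototype" pattern, then classify a new image by whichever prototype its pixels sit closest to. Real models learn far richer features than pixel averages, and the function names here are illustrative, not any real API.

```python
import numpy as np

def fit_prototypes(examples):
    """examples: dict mapping a class name to a list of images (NumPy arrays)."""
    return {cls: np.mean(imgs, axis=0) for cls, imgs in examples.items()}

def classify(prototypes, image):
    # Pick the class whose learned pixel "pattern" the new image is closest to.
    distances = {cls: float(np.linalg.norm(image - proto))
                 for cls, proto in prototypes.items()}
    return min(distances, key=distances.get)
```

In this toy setup, an image whose pixels follow the "missing fuel cap" pattern lands nearest that prototype and is labeled accordingly.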

A computer vision model recognizing a missing fuel cap (Image credit: JourneyApps)

How is a computer vision model created?

Creating a computer vision model typically involves the same high-level steps, regardless of the use case being addressed. These steps are often referred to as Machine Learning Operations (or MLOps, for short).

High Level MLOps Process
Step 1: Capture training and evaluation data
Step 2: Label training and evaluation data
Step 3: Set an evaluation threshold and run model training
Step 4: Evaluate model accuracy against evaluation data
Step 5: Deploy model and run continuous improvement MLOps
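Steps 3 through 5 can be sketched as a simple gate: train, measure accuracy against held-out evaluation data, and only deploy if the result clears the threshold. The `train_model` argument and the data shapes here are hypothetical placeholders, not a real training API.

```python
def evaluate(model, eval_data):
    # Step 4: fraction of held-out (image, label) examples the model gets right.
    correct = sum(1 for image, label in eval_data if model(image) == label)
    return correct / len(eval_data)

def train_and_gate(train_model, train_data, eval_data, threshold=0.95):
    # Step 3: train, then gate deployment (step 5) on the evaluation threshold.
    model = train_model(train_data)
    return model if evaluate(model, eval_data) >= threshold else None
```

A model that falls short of the threshold returns `None`, signaling that more training data or debugging is needed before deployment.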

Each step has its own intricacies, best practices and variations depending on what you’d like to achieve. We’ve found the following to be widely applicable rules of thumb:

  • Training and evaluation data should be captured in the same conditions as those in which the model will be used – this means videos should contain similar visual “noise” and be shot in similar lighting conditions.
  • Labeling can be automated to a large extent: for example, AI-enabled feature tracking allows all frames in a video to be labeled after only a few frames are manually labeled.
  • Take advantage of data augmentation to create “synthetic” training data: copies of existing data with aspects like rotation, brightness, saturation and mirroring changed.
  • Run analytics on model training to determine model accuracy and spot areas that require debugging.
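As a minimal sketch of the data augmentation rule of thumb above, the following generates three synthetic variants of a single training image using NumPy array operations (the variants chosen are illustrative; real augmentation pipelines apply many more transforms, often with randomized parameters):

```python
import numpy as np

def augment(image):
    """Create synthetic variants of one H x W x 3 uint8 training image."""
    mirrored = image[:, ::-1, :]                  # horizontal mirror
    rotated = np.rot90(image, k=1, axes=(0, 1))   # 90-degree rotation
    # Brightness shift, widened to int16 first to avoid uint8 overflow.
    brighter = np.clip(image.astype(np.int16) + 40, 0, 255).astype(np.uint8)
    return [mirrored, rotated, brighter]
```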

Running computer vision on RealWear®

JourneyApps provides a solution which allows computer vision models to run on RealWear devices. These models run on-device, in real time. This means no internet connection is required for them to run – this is sometimes referred to as running computer vision on the “edge”.

As a head-mounted wearable, RealWear provides the unique ability to run computer vision models from the viewpoint of the wearer. This enables computer vision to augment the effectiveness of individuals wearing RealWear devices.

Computer vision running on a RealWear device (Credit: JourneyApps)

Computer vision use cases

The top use cases for deploying computer vision on RealWear devices include:

Visual Inspection (Including Defect Detection)
Many machinery manufacturers incorporate quality processes into assembly line responsibilities. Computer vision for presence/absence checks can assist associates who are required to quickly spot defects while also completing various assembly tasks.

Part Tracking / Counting
For certain manufacturing setups, such as engineered to order (ETO) manufacturing jobs, associates need to track all parts retrieved and used in the assembly process. Computer vision can provide automatic counting and logging of parts used, eliminating paperwork and errors and thereby saving cost and time.

Beyond manufacturing, picking and packing tasks take significantly less time when picked items are automatically logged through computer vision. This can include scanning multiple barcodes at the same time and integrating data into existing inventory systems.
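Automatic counting from a detection model's output can be as simple as tallying labels per frame. The `(label, confidence)` pair format below is an assumption; most detection models emit something similar after post-processing.

```python
from collections import Counter

def count_parts(detections, min_confidence=0.5):
    """Tally part labels from one frame's (label, confidence) detections,
    discarding low-confidence predictions."""
    return Counter(label for label, conf in detections if conf >= min_confidence)
```

The resulting counts can then be logged against the job record in place of manual paperwork.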

MLOps Automation

A key part of implementing a computer vision solution is running effective MLOps. JourneyApps automates steps 3 through 5 discussed above, and provides automation for step 1 (data capture) and step 2 (data labeling).

Data capture is the foundation of any computer vision project, and as such, JourneyApps provides a solution which ensures that even teams new to computer vision can collect high-quality training data in a short period of time. This is achieved through a data capture workflow where project leaders set out instructions for what data needs to be captured – e.g. a video of a mower engine with a missing fuel cap, shot under assembly line lighting.

Data, in most cases video, is then captured through mobile devices or RealWear wearables and automatically saved to the cloud, where it is ready for labeling. Once a few frames have been labeled, the training process runs automatically and a computer vision model is deployed to the edge, including RealWear and mobile devices.

Schedule a demo

To learn more about using computer vision in your operations, schedule a demo with one of our engineers here.
