Reconstruction and 2D Mode

This page explains some of the settings that affect how 3D tracking data is obtained. Most of the related settings can be found under the Live Pipeline tab in the Application Settings. A basic understanding of this process will allow you to fully utilize Motive for analyzing and optimizing captured 3D tracking data. That said, we do not recommend changing these settings, as the defaults should work well for most tracking applications.

Reconstruction: Basic Concept

Reconstruction is the process of deriving 3D points from 2D coordinates obtained from captured camera images. When multiple synchronized images are captured, the 2D centroid locations of detected marker reflections are triangulated on each captured frame and processed through the solver pipeline to be tracked. This process involves trajectorization of the detected 3D markers within the calibrated capture volume and the booting process for tracking defined assets.
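As a rough illustration of the triangulation step, here is a minimal sketch that reduces each calibrated camera to a ray (its position plus a direction back-projected through the detected 2D centroid) and computes the least-squares point closest to all rays. It is illustrative only, under those simplifying assumptions, and is not Motive's solver or API.

```python
# Minimal triangulation sketch (illustrative, not Motive's implementation):
# each calibrated camera contributes a ray from its position through the
# detected 2D centroid; the least-squares point closest to all rays is a
# common closed-form estimate of the 3D marker location.
import numpy as np

def triangulate(origins, directions):
    """origins, directions: (N, 3) arrays describing N marker rays."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)  # projector onto the plane normal to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)        # point minimizing summed distance to all rays

# Two cameras whose rays both pass through the point (0, 1, 2).
origins = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
directions = np.array([[0.0, 1.0, 2.0], [-2.0, 1.0, 2.0]])
print(triangulate(origins, directions))  # -> approximately [0. 1. 2.]
```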

For real-time tracking in Live mode, the settings for this pipeline can be configured from the Live Pipeline tab in the Application Settings. For post-processing recorded files in Edit mode, the solver settings can be accessed under the corresponding Take properties. Note that optimal configurations may vary depending on the capture application and environmental conditions, but for most common applications the default settings should work well.

On this page, we will focus on the Live Pipeline settings and the camera settings, which are the key settings that have a direct effect on the reconstruction outcome.

Camera Settings
Camera settings can be configured under the Devices pane. In general, the overall quality of 3D reconstructions is affected by the quality of the captured camera images. For this reason, the camera lenses must be focused on the tracking volume, and the settings should be configured so that the markers are clearly visible in each camera view. Camera settings such as exposure and IR intensity must therefore always be checked and optimized for each setup. The following sections highlight additional settings that are directly related to 3D reconstruction.

Enable Reconstruction
The Reconstruction toggle enables or disables individual cameras from contributing to real-time reconstruction.

Tracking mode vs. Reference mode: Only cameras that are configured in a tracking mode (Object or Precision) will contribute to reconstructions. Cameras in a reference mode (MJPEG or Grayscale) will NOT contribute to reconstructions. See the Camera Video Types page for more information.

  • To switch between camera video types in Motive, click the camera video type icon under Mode in the Devices pane.

Threshold Setting
The THR setting is located in the camera properties in Motive. When cameras are set to a tracking mode, only the pixels with brightness values greater than the configured threshold are captured and processed. Pixels brighter than the threshold are referred to as thresholded pixels, and all other pixels that do not satisfy the brightness threshold are filtered out. Only clusters of thresholded pixels are then passed through the 2D Object Filter to be potentially considered as marker reflections.

We do not recommend lowering the THR value (default: 200) for the cameras, since lowering it can introduce false reconstructions and noise into the data.

To inspect the brightness values of individual pixels, set Pixel Inspection to true under the View tab in the Application Settings.
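As a rough mental model of the threshold step, the sketch below keeps only pixels brighter than THR and groups adjacent ones into candidate reflections. It is a conceptual illustration of what happens on the camera, not code that interacts with Motive; the frame contents are made up for the example.

```python
# Conceptual sketch of brightness thresholding and clustering (illustrative only).
import numpy as np
from scipy import ndimage

THR = 200                                   # default camera threshold described above

frame = np.zeros((480, 640), dtype=np.uint8)     # stand-in for a mostly dark IR frame
frame[100:106, 200:207] = 230                    # one bright marker reflection

thresholded = frame > THR                        # "thresholded pixels"
labels, num_blobs = ndimage.label(thresholded)   # cluster adjacent thresholded pixels
sizes = ndimage.sum(thresholded, labels, index=range(1, num_blobs + 1))
print(num_blobs, sizes)                          # -> 1 [42.]
```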

Live Pipeline Settings
The Live Pipeline settings under the application settings control the tracking quality in Motive. When a camera system captures multiple synchronized 2D frames, the images are processed through two main stages before being reconstructed into 3D tracking data: the first filter is applied at the camera hardware level, and the second at the software level. Both are important in deciding which 2D reflections get identified as marker reflections and reconstructed into 3D data. Adjust these settings to optimize 3D data acquisition in both live reconstruction and post-processing reconstruction of captured data.

Camera Filter - Software

When a frame is captured by a camera, the 2D camera filter is applied. This filter judges the sizes and shapes of the detected reflections or IR illuminations and determines which ones can be accepted as markers. Please note that the camera filter settings can be configured in Live mode only, because this filter is applied at the hardware level when the 2D frames are first captured. You will therefore not be able to modify these settings on a recorded Take, as the 2D data has already been filtered and saved; however, when needed, you can increase the threshold on the filtered 2D data and perform post-processing reconstruction to recalculate 3D data from the 2D data.

Min/Max Thresholded Pixels

The Min/Max Thresholded Pixels settings determine lower and upper boundaries of the size filter. Only reflections with pixel counts within the boundaries will be considered as marker reflections, and any other reflections below or above the defined boundary will be filtered out. Thus, it is important to assign appropriate values to the minimum and maximum thresholded pixel settings.

For example, in a close-up capture application, marker reflections appear bigger in the camera's view. In this case, you may want to increase the maximum threshold value so that reflections with more thresholded pixels are still considered as marker reflections. For common applications, however, the default range should work fine.

Enable Marker Size under the visual aids options in the viewport to inspect which reflections are accepted, or omitted, by the size filter.
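Below is a hedged sketch of the size and roundness checks performed by the 2D Object Filter (the roundness check is described in the Circularity section that follows). The blob metrics and threshold values are illustrative assumptions, not Motive's internal implementation or defaults.

```python
# Illustrative 2D object filter: keep a reflection only if its thresholded pixel
# count falls inside [min_px, max_px] and it is round enough. The circularity
# metric 4*pi*A/P^2 is 1.0 for a perfect circle and near 0 for thin streaks.
from dataclasses import dataclass
import math

@dataclass
class Blob:
    pixel_count: int      # number of thresholded pixels in the reflection
    perimeter: float      # approximate boundary length in pixels

def circularity(blob: Blob) -> float:
    return 4 * math.pi * blob.pixel_count / (blob.perimeter ** 2)

def passes_filter(blob: Blob, min_px=4, max_px=200, min_circularity=0.6) -> bool:
    return (min_px <= blob.pixel_count <= max_px
            and circularity(blob) >= min_circularity)

print(passes_filter(Blob(pixel_count=30, perimeter=20.0)))  # round blob -> True
print(passes_filter(Blob(pixel_count=30, perimeter=60.0)))  # elongated streak -> False
```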

Circularity
In addition to the size filter, the 2D Object Filter also identifies marker reflections based on their shape, specifically their roundness. It assumes that all marker reflections have circular shapes and filters out non-circular reflections detected by each camera. The allowable circularity value is defined in the Live Pipeline settings. The valid range is between 0 and 1, with 0 being completely flat and 1 being perfectly round. Only reflections with circularity values greater than the defined threshold will be considered as marker reflections.

Enable Marker Circularity under the visual aids options in the viewport to inspect which reflections are accepted, or omitted, by the circularity filter.

Object mode vs. Precision Mode
Object mode and Precision mode deliver slightly different data to the host PC. In Object mode, cameras compute the 2D centroid location, size, and roundness of markers on-camera and deliver those values to the host PC. In Precision mode, cameras send the thresholded pixel regions to the host PC, where additional processing determines the centroid location, size, and roundness of the reflections. Read more on the Camera Video Types page.

Marker Rays
After the 2D camera filter has been applied, each 2D centroid captured by a camera forms a marker ray: a 3D vector ray that extends from the calibrated camera through the detected centroid into the capture volume. When a minimum required number of rays (as defined in the minimum rays settings) converge and intersect within the allowable maximum offset distance (defined by the 3D Threshold settings), trajectorization of a 3D marker occurs. Trajectorization is the process of using 2D data to calculate the corresponding 3D marker trajectories in Motive.

Monitoring marker rays is an efficient way of inspecting reconstruction outcomes. The rays show up by default; if they do not, they can be enabled under the visual aids options in the viewport toolbar. There are two different types of marker rays in Motive: tracked rays and untracked rays. By inspecting these marker rays, you can easily find out which cameras are contributing to the reconstruction of a selected marker.

Tracked Ray (Green)

Tracked rays are marker rays that represent detected 2D centroids that are contributing to 3D reconstructions within the volume. Tracked Rays will be visible only when reconstructions are selected from the viewport.

Untracked Ray (Red)

An untracked ray is a marker ray that fails to contribute to the reconstruction of a 3D point. Untracked rays occur when the reconstruction requirements, usually the minimum ray count or the maximum residual, are not met.

Software Filter: Solver Settings

Motive processes marker rays together with the camera calibration to reconstruct the corresponding markers, and the solver settings determine how 2D data gets trajectorized and solved into 3D data for tracking Rigid Bodies and/or Skeletons. The solver not only tracks from the marker rays but also utilizes pre-defined asset definitions to provide high-quality tracking. The default solver settings work for most tracking applications, and users should not need to modify them. That said, some of the basic settings that can be modified are summarized below.

Minimum Rays to Start / Minimum Rays to Continue

This setting sets the minimum number of tracked marker rays required for a 3D point to be reconstructed; in other words, the number of calibrated cameras that need to see the marker. Increasing the minimum ray count may prevent extraneous reconstructions, while decreasing it can help markers remain tracked when only a few cameras can see them. In general, modifying this setting is recommended only for high camera count setups.
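The sketch below illustrates how separate start/continue ray thresholds behave over a sequence of frames. The threshold values and the structure are assumptions made for illustration; they are not Motive's solver or its defaults.

```python
# Illustrative start/continue gating: a marker boots only when enough rays
# converge, then survives brief occlusion as long as a lower ray count is met.
MIN_RAYS_TO_START = 3
MIN_RAYS_TO_CONTINUE = 2

def update_marker(is_tracked: bool, converging_rays: int) -> bool:
    """Return whether the marker is tracked on this frame."""
    if is_tracked:
        return converging_rays >= MIN_RAYS_TO_CONTINUE
    return converging_rays >= MIN_RAYS_TO_START

tracked = False
for rays in [2, 3, 2, 2, 1]:
    tracked = update_marker(tracked, rays)
    print(rays, tracked)  # -> 2 False, 3 True, 2 True, 2 True, 1 False
```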

More Settings
The Live Pipeline settings do not have to be modified for most tracking applications. There are other reconstruction settings that can be adjusted to improve the acquisition of 3D data. For a detailed description of each setting, read through the Application Settings: Live Pipeline page or refer to the corresponding tooltips.

Real-time Solve

Motive performs real-time reconstruction of 3D coordinates directly from either captured or recorded 2D data. When Motive is live-processing the data, you can examine the marker rays from the viewport, inspect the Live-Pipeline settings, and optimize the 3D data acquisition.

There are two modes in which Motive reconstructs 3D data in real time:

  • Live mode (Live 2D data capture)

  • 2D mode (Recorded 2D data)

Live Mode
In Live mode, Motive is live-processing the data from the captured 2D frames to obtain 3D tracking data in real time, and you can inspect and monitor the marker rays in the 3D viewport. Any changes to the Live Pipeline (Solver/Camera) settings under the Application Settings will be reflected immediately in Live mode.

2D Mode

The 2D Mode is used to monitor 2D data in the post-processing of a captured Take. When a capture is recorded in Motive, both 2D camera data and reconstructed 3D data are saved into a Take file, and by default, the 3D data gets loaded first when a recorded Take file is opened.
Recorded 3D data contains only the 3D coordinates that were live-reconstructed at the moment of capture; in other words, this data is completely independent of the 2D data once the recording has been made. You can still, however, view and use the recorded 2D data to optimize the solver parameters and reconstruct a fresh set of 3D data from it. To do so, you need to switch into 2D Mode in the Data pane.

In 2D Mode, Motive reconstructs in real time from the recorded 2D data, using the reconstruction/solver settings that were configured in the Application Settings at the time of recording; these settings are saved under the properties of the corresponding TAK file. Please note that for post-processing, the reconstruction/solver settings from the TAK properties are applied instead of the settings from the Application Settings panel. When editing a TAK file in 2D Mode, any changes to the reconstruction/solver settings under the TAK properties will be reflected in real time in how the 3D reconstructions are solved.

Switching to 2D Mode
Under the Data pane, click the context menu icon to access the menu options and check the 2D Mode option.

Applying changes to 3D data

Once the reconstruction/solver settings have been adjusted and optimized on recorded data, the post-processing reconstruction pipeline needs to be performed on the Take in order to reconstruct a new set of 3D data. Here, note that the existing 3D data will get overwritten and all of the post-processing edits on it will be discarded.

Post-Processing Reconstruction

The post-processing reconstruction pipeline allows you to convert 2D data from a recorded Take into 3D data. In other words, you can obtain a fresh set of 3D data from recorded 2D camera frames by performing reconstruction on a Take. Also, if any of the reconstruction parameters have been optimized post-capture, the changes will be reflected in the newly obtained 3D data.

  • Reconstructing recorded Takes again either by Reconstruct or Reconstruct and Auto-label pipeline will completely overwrite existing 3D data, and any post-processing edits on trajectories and marker labels will be discarded.

  • Also, for Takes involving Skeleton assets, if the Skeletons are never in well-trackable poses throughout the captured Take, the recorded Skeleton marker labels, which were intact during the live capture, may be discarded, and reconstructed markers may not be auto-labeled again. This is another reason why you want to start a capture with a calibration pose (e.g. T-pose).
Performing post-processing reconstruction: To perform post-processing reconstruction, open the Data pane, select the desired Takes, right-click on the Take selection, and use either the Reconstruct pipeline or the Reconstruct and Auto-label pipeline from the context menu.

Camera Filter Settings: In Edit mode, 2D camera filters can still be modified from the tracking group properties in the Devices pane. Modified filter settings will change which markers in the recorded 2D data get processed through the Live Pipeline engine.

Solver/Reconstruction Settings: When you perform post-processing reconstruction on recorded Takes, a new set of 3D data will be reconstructed from the filtered 2D camera data. In this step, the solver settings defined under the corresponding Take properties in the Properties pane will be used. Note that the reconstruction properties under the Application Settings are for the Live capture system only.

Reconstruct and Auto-label will additionally apply the auto-labeling pipeline to the obtained 3D data and label any markers that associate with existing asset (Rigid Body or Skeleton) definitions. The auto-labeling pipeline is explained in more detail on the Labeling page.

Post-processing reconstruction can be performed either on the entire frame range of a Take or only within a desired frame range by selecting the range in the Control Deck or in the Graph pane. When nothing is selected, reconstruction will be applied to all frames.

Entire frames of multiple Takes can be selected and processed together by selecting the desired Takes under the Data pane.
