
QUICK START GUIDES

Quick Start Guide: Getting Started

Welcome to the Quick Start Guide: Getting Started!

This guide provides a quick walk-through of installing and using OptiTrack motion capture systems. Key concepts and instructions are summarized in each section of this page to help you get familiarized with the system and get you started with the capture experience.

Note that Motive offers features far beyond the ones listed in this guide, and the capability of the system can be further optimized to fit your specific capture applications using the additional features. For more detailed information on each workflow, read through the corresponding workflow pages in this wiki: hardware setup and software setup.

Hardware Setup

Preparing the Capture Area

For best tracking results, you need to prepare and clean up the capture environment before setting up the system. First, remove unnecessary objects that could block the camera views. Cover open windows and minimize incoming sunlight. Avoid setting up a system over reflective flooring since IR lights from cameras may get reflected and add noise to the data. If this is not an option, use rubber mats to cover the reflective area. Likewise, items with reflective surfaces or illuminating features should be removed or covered with non-reflective materials in order to avoid extraneous reflections.

Key Checkpoints for a Good Capture Area

  • Minimize ambient lights, especially sunlight and other infrared light sources.

  • Clean capture volume. Remove unnecessary obstacles within the area.

  • Tape over or cover remaining reflective objects in the area.

See Also: workflow pages.

Cabling and Load Balancing

Ethernet Camera System

Ethernet camera models: PrimeX series and SlimX 13 cameras. Connect each of the required system components as described below.

  • Connect PoE Switch(es) to the Host PC: Start by connecting a PoE switch to the host PC via an Ethernet cable. Since the camera system takes up a large amount of data bandwidth, the Ethernet camera network traffic must be separated from the office/local area network. If the computer used for capture is connected to an existing network, you will need to use a second Ethernet port or an add-on network card to connect the computer to the camera network. When you do, make sure to turn off the computer's firewall for that particular network under the Windows Firewall settings.

  • Connect the Ethernet Cameras to the PoE Switch(s): Ethernet cameras connect to the host PC via PoE/PoE+ switches using Cat 6, or above, Ethernet cables.

  • USB Cables: Keep USB cable length restrictions in mind; each USB 2.0 cable must not exceed 5 meters in length.

  • Connect the OptiHub(s) to the Host PC: Use USB 2.0 cables (type A/B) to connect each OptiHub to the host PC. To optimize available bandwidth, split the OptiHub connections evenly between different USB adapters on the host PC. For large system setups, up to two 5-meter active USB extensions can be used to connect an OptiHub, for a total length of 15 meters.


Placing and Aiming Cameras

Optical motion capture systems utilize multiple 2D images from each camera to compute, or reconstruct, corresponding 3D coordinates. For best tracking results, cameras must be placed so that each captures a unique vantage of the target capture area. Place the cameras around the perimeter of the capture volume, as shown in the example below, so that markers in the volume are visible to at least two cameras at all times. Mount cameras securely onto stable structures (e.g. a truss system) so that they don't move throughout the capture. When using tripods or camera stands, ensure that they are placed in stable positions. After placing the cameras, aim them so that their views overlap around the region where most of the capture will take place. Any significant camera movement after system calibration may require re-calibration. Cable strain relief should be used at the camera end of camera cables to prevent potential damage to the camera.

See Also: Camera Placement and Camera Mount Structures pages.

Lens Focus

In order to obtain accurate and stable tracking data, it is very important that all of the cameras are correctly focused on the target volume. This is especially important for close-up and long-range captures. For common tracking applications, focus-to-infinity should work fine; however, it is still important to confirm that each camera in the system is in focus.

To check or adjust camera focus, place some markers in the target tracking area. Then, set the camera to raw grayscale mode, increase the exposure and LED settings, zoom in on one of the retroreflective markers in the capture volume, and check the clarity of the image. If the image is blurry, adjust the camera focus to find the point where the marker is best resolved.

See Also: Aiming and Focusing page.

Software Setup

Host PC Requirements

In order to properly run a motion capture system using Motive, the host PC must satisfy the minimum system requirements, which vary depending on the size of the mocap system and the types of cameras used. Consult our Sales Engineers, or use the Build Your Own feature on our website, to find out the host PC specification requirements.

Motive Installation

Motive is a software platform designed to control motion capture systems for various tracking applications. Motive not only allows the user to calibrate and configure the system, but it also provides interfaces for both capturing and processing of 3D data. The captured data can be recorded or live-streamed into other pipelines.

If you are new to Motive, we recommend reading through the Motive Basics page after going through this guide to learn the basic navigation controls in Motive.

Motive Activation Requirements

The following items are required to activate Motive. Please note that the Motive license must still be valid on the release date of the version you are activating. If the license has expired, please update the license or use an older version of Motive that was released prior to the license expiration date.

  • Motive 2.x license

  • USB Hardware Key

Host PC Requirements

Required PC specifications may vary depending on the size of the camera system. Generally, the recommended specifications (listed later in this guide) are required for systems with more than 24 cameras.

Download and Install

To install Motive, simply download the Motive software installer for your operating system from the Motive Download Page, then run the installer and follow its prompts.

Note: Anti-virus software can interfere with Motive's ability to communicate with cameras or other devices, and it may need to be disabled or configured to allow the device communication to properly run the system.

Install Requirements

The first time Motive 2.3.x is installed on a computer, the following software also needs to be installed:

  • Microsoft Visual C++ Redistributables 2013 and 2015

  • Microsoft DirectX 9c

  • OptiTrack USB Drivers

It is important to install the specific versions required by Motive 2.3.x, even if newer versions are installed.

License Activation Steps

  1. Insert the USB Hardware Key into a USB-A port on the computer. If needed, you can also use a USB-A adapter to connect.

  2. Launch Motive.

  3. Activate your software using the License Tool, which can be accessed in the Motive splash screen. You will need to input the License Serial Number and the Hash Code for your license.

Notes on using USB Hardware Key

  • When connecting the USB Hardware Key to the computer, avoid sharing the USB card with other USB devices that transmit large amounts of data frequently. For example, if you have external devices (e.g. force plates, NI-DAQ) that communicate via USB, connect those devices to a separate USB card so that they don't interfere with the Security Key.

First Launch

When you first launch Motive, the Quick Start panel will show up, and you can use this panel to quickly get started on specific tasks. By default, Motive will start on the Calibration Layout. Using this layout, you can calibrate the camera system and construct a 3D tracking volume. Note that the initial layout may be slightly different for different camera models or software licenses.

The panels on the initial layout include the Quick Start panel, Devices pane, Properties pane, Perspective View pane, and Camera Preview pane; each is described later in this guide.

See Also: List of UI pages from the Motive section of the wiki.

Viewport Navigation

Most of the navigation controls for the 2D and 3D viewports in Motive are customizable, including both mouse and hotkey controls. The Hotkey Editor pane and the Mouse Control pane under the Edit tab allow you to customize keyboard shortcuts and mouse controls for common operations. The default viewport controls are listed later in this guide.

Camera Settings

Now that the cameras are connected and showing up in Motive, the next step is to configure the camera settings. Appropriate camera settings vary depending on factors such as the capture environment and the tracked objects. The overall goal is to configure the settings so that the marker reflections are clearly captured and distinguished in the 2D view of each camera. For a detailed explanation of individual settings, please refer to the Devices pane page.

To check whether the camera settings are optimized, it is best to check both the grayscale mode images and the tracking mode (Object or Precision) images and make sure the marker reflections stand out from the image. You can switch a camera into grayscale mode either in Motive or by using the Aim Assist button on supported cameras. In Motive, you can right-click on the camera in the Cameras Viewport and switch the video mode from the context menu, or change the video mode through the Properties pane.

Exposure Setting

The exposure setting determines how long the camera imagers are exposed for each frame of data. The longer the exposure, the more light the camera captures, creating brighter images that can improve the visibility of small and dim markers. However, high exposure values can introduce false markers, larger marker blooms, and marker blurring, all of which can negatively impact marker data quality. It is best to keep the exposure setting as low as possible while the markers remain clearly visible in the captured images.

System Calibration

Tip: For the calibration process, click the Layout → Calibrate menu (CTRL + 1) to access the calibration layout.

In order to start tracking, all cameras must first be calibrated. Through the camera calibration process, Motive computes the position and orientation of each camera (extrinsics) as well as the amount of lens distortion in the captured images (intrinsics). Using the calibration results, Motive constructs a 3D capture volume, and motion tracking is accomplished within this volume. All of the calibration tools can be found in the Calibration pane. Read through the Calibration page to learn about the calibration process and what other tools are available for more efficient workflows.

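To make the extrinsics/intrinsics distinction concrete, here is a generic pinhole-camera sketch of how a calibrated camera's pose maps a 3D point into its 2D image. This is textbook projection math for illustration only, not Motive's internal implementation; the rotation matrix R, translation t, and focal length f below are hypothetical inputs, and lens distortion terms are omitted.

```python
# Generic pinhole projection: world point -> 2D image coordinates.
# R (3x3, row-major) and t (length 3) play the role of a camera's
# extrinsics; f stands in for the intrinsics (distortion omitted).
def project(point, R, t, f=1.0):
    # World -> camera coordinates: p_cam = R @ p + t
    x = sum(R[0][i] * point[i] for i in range(3)) + t[0]
    y = sum(R[1][i] * point[i] for i in range(3)) + t[1]
    z = sum(R[2][i] * point[i] for i in range(3)) + t[2]
    # Perspective divide onto the image plane.
    return (f * x / z, f * y / z)

# A camera at the origin looking down +Z with identity rotation:
identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
uv = project((1.0, 2.0, 4.0), identity, (0.0, 0.0, 1.0))  # (0.2, 0.4)
```

During calibration, Motive solves for these quantities per camera; afterwards, intersecting the back-projected rays from two or more cameras is what yields the reconstructed 3D marker positions.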

Duo/Trio Tracking Bars: Camera calibration is not needed for Duo/Trio Tracking Bars. The cameras are pre-calibrated using their fixed camera placements, which allows the tracking bars to work right out of the box without the calibration process. To adjust the ground plane, use the Coordinate System Tools in Motive.

Calibration Steps

Masking

  1. Remove any unwanted objects and physically cover any extraneous IR light reflections or interferences within the capture volume.

  2. [Motive:Calibration pane] In Motive, open the Calibration pane or use the calibration layout (CTRL + 1).

  3. [Motive:Calibration pane] Click the Block Visible button in the Calibration pane, or use the masking icon in the Camera Preview pane.

Wanding

  1. Bring out the calibration wand.

  2. [Motive:Calibration pane] From the Calibration pane, make sure the Calibration Type is set to Full and the correct wand type is specified under the OptiWand section.

  3. [Motive:Calibration pane] Click Start Wanding to begin wanding.

Wanding tips

  • For best results, collect wand samples evenly and comprehensively throughout the volume, covering both low and high elevations. If you wish to start calibrating inside the volume, cover one of the markers and expose it wherever you wish to start wanding. When at least two cameras detect all three markers while no other reflections are present in the volume, the wand will be recognized, and Motive will start collecting samples.

  • A sufficient sample count for the calibration may vary for different sized volumes, but in general, collect 2,500 to 6,000 samples for each camera. Once a sufficient number of samples has been collected, press the Calculate button under the Calibration section.

Setting the Ground Plane

Now that all of the cameras have been calibrated, the next step is to define the ground plane of the capture volume.

  1. Place a calibration square inside the capture volume. Position the square so that the vertex marker sits directly over the desired global origin.

  2. Orient the calibration square so that the longer arm points along the desired +Z axis and the shorter arm points along the desired +X axis of the volume. Motive uses a y-up right-handed coordinate system.

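Since Motive uses a y-up right-handed coordinate system, tracking data handed to a z-up tool needs an axis conversion. The sketch below shows one common mapping, a +90 degree rotation about X so that the old +Y (up) axis becomes +Z; the target convention here is an assumption, so verify it against your destination application.

```python
# Convert a point from a y-up right-handed frame (Motive's convention)
# to a z-up right-handed frame: (x, y, z) -> (x, -z, y).
# This mapping is an illustrative choice, not an OptiTrack API.
def y_up_to_z_up(p):
    x, y, z = p
    return (x, -z, y)

# Sanity check: the old "up" direction (+Y) should map onto +Z.
up = y_up_to_z_up((0.0, 1.0, 0.0))
```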

Capture Setup

Once the camera system has been calibrated, Motive is ready to collect data. Before doing so, let's prepare the session folders for organizing the capture recordings and define the trackable assets, including Rigid Bodies and/or Skeletons.

Set Up for Capture Session

Motive Recordings

Each capture recording is saved as a Take (TAK) file, and related Take files can be organized into session folders. Start your capture by first creating a new session folder: create a new folder in the desired directory on the host computer and load it into the Data pane, either by clicking the folder icon or by dragging and dropping it onto the pane. If no session folder is loaded, all recordings are saved to the default folder in the user documents directory (Documents\OptiTrack\Default). All newly recorded Takes are saved into the currently selected session folder.

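Session folders are ordinary directories on the host PC, so they can also be prepared with a short script. A minimal sketch, where the root path and naming scheme are assumptions to adapt to your own setup:

```python
# Sketch: create a session folder for a capture day and list any Take
# (*.tak) files already recorded into it. Paths and names here are
# hypothetical examples, not OptiTrack conventions.
from pathlib import Path

def prepare_session_folder(root: str, session_name: str) -> Path:
    """Create (if needed) and return a session folder under root."""
    session = Path(root) / session_name
    session.mkdir(parents=True, exist_ok=True)
    return session

def list_takes(session: Path) -> list:
    """Names of Take files in the session folder, sorted."""
    return sorted(p.name for p in session.glob("*.tak"))
```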

Motive Profiles

Motive's software configurations are saved to Motive profiles (*.motive extension). All of the application-related settings can be saved into a Motive profile, and you can export and import these files to easily maintain the same software configurations.

Marker Up

Place the retroreflective markers onto the subjects (Rigid Bodies or Skeletons) that you wish to track, and double-check that the markers are attached securely. For Skeleton tracking, open the Builder pane, go to the skeleton creation options, and choose the marker set you wish to use, then follow the skeleton avatar diagram when placing the markers. If you are using a mocap suit, make sure that the suit fits as tightly as possible. Motive derives the position of each body segment from the related markers that you place on the suit; accordingly, it is important to prevent the markers from shifting as much as possible. Sample marker placements are shown below.

See Also: Markers page for marker types, or the Rigid Body Tracking and Skeleton Tracking pages for placement directions.

Define Skeletons and Rigid Bodies

Tip: For creating trackable assets, click the Layout → Create menu item to access the model creation layout.

Create Rigid Body

To define a Rigid Body, simply select three or more markers in the Perspective View, right-click, and select Rigid Body → Create Rigid Body From Selected. You can also use the CTRL+T hotkey or the Builder pane to create Rigid Body assets.

Create Skeleton

To define a Skeleton, have the actor enter the volume with markers attached at the appropriate locations. Open the Builder pane and select Skeleton and Create. Under the marker set section, select the marker set you wish to use, and a corresponding model with the desired marker locations will be displayed. After verifying that the marker locations on the actor correspond to those in the Builder pane, instruct the actor to strike the calibration pose. The most common calibration pose is the T-pose: a proper standing posture with the back straight and the head looking directly forward, with both arms stretched out to the sides, forming a "T" shape. While the actor is in T-pose, select all of the markers of the desired skeleton in the 3D view and click the Create button in the Builder pane. In some cases, you may not need to select the markers if only the desired actor is in view.

See Also: Rigid Body Tracking and Skeleton Tracking pages.

Record Data

Tip: For recording a capture, use the Layout → Capture menu item to access the capture layout.

Once the volume is calibrated and the Skeletons are defined, you are ready to capture. In the Control Deck at the bottom of Motive, press the dimmed red record button, or simply press the spacebar while in Live mode, to begin capturing. The button illuminates in bright red to indicate that recording is in progress. Stop recording by clicking the record button again, and a corresponding capture file (TAK extension), also known as a capture Take, will be saved in the current session folder. Once a Take has been saved, you can play back, reconstruct, edit, and export your data in a variety of formats for additional analysis or for use with most 3D software.

When tracking skeletons, it is beneficial to start and end the capture with a T-pose. This allows you to recreate the skeleton in post-processing when needed.

See Also: Data Recording page.

Post-Capture

Data Editing

After capturing a Take, the recorded 3D data and its trajectories can be post-processed using the data editing tools found in the Edit Tools pane. These tools provide post-processing features such as deleting unreliable trajectories, smoothing selected trajectories, and interpolating missing (occluded) marker positions. Post-editing the 3D data can improve the quality of the tracking data.

Tip: For data editing, use the Layout → Edit menu item to access the edit layout.

General Editing Steps

  1. Skim through the overall frames in a Take to get an idea of which frames and markers need to be cleaned up.

  2. Refer to the Labels pane and inspect the gap percentages of each marker.

  3. Select a marker that is often occluded or misplaced.

  4. Look through the frames in the Graph pane.

Marker Labeling

Markers detected in the camera views get trajectorized into 3D coordinates. The reconstructed markers need to be labeled for Motive to distinguish different trajectories within a capture. Trajectories of labeled reconstructions can be exported individually or solved altogether to track the movements of the target subjects. Markers associated with Rigid Bodies and Skeletons are labeled automatically through the auto-labeling process. Note that Rigid Body and Skeleton markers can be auto-labeled both in Live mode (before capture) and in Edit mode (after capture). Individual markers can also be labeled, but each must be labeled manually in post-processing using the Labeling pane. These manual tools can also be used to correct labeling errors. Read through the Labeling page for more details on assigning and editing marker labels.

  • Auto-label: Automatically label sets of Rigid Body markers and skeleton markers using the corresponding asset definitions.

  • Manual Label: Label individual markers manually using the Labeling pane, assigning labels defined in the Marker Set, Rigid Body, or Skeleton assets.

See Also: Labeling page.

Changing Marker Labels and Colors

When needed, you can use the Marker Sets pane to adjust marker labels for both Rigid Body and Skeleton markers. You can also adjust marker sticks and marker colors as needed.

Data Export

Motive exports reconstructed 3D tracking data in various file formats, and the exported files can be imported into other pipelines to further utilize the capture data. Supported formats include CSV and C3D for Motive: Tracker, and additionally FBX, BVH, and TRC for Motive: Body. To export tracking data, select a Take to export and open the export dialog window, which can be accessed from File → Export Tracking Data or by right-clicking a Take → Export Tracking Data in the Data pane. Multiple Takes can be selected and exported from Motive or by using the Motive Batch Processor. From the export dialog window, the frame rate, measurement scale, and frame range of the exported data can be configured. Frame ranges can also be specified by selecting a frame range in the Graph View pane before exporting a file. In the export dialog window, corresponding export options are available for each file format.

See Also: Data Export page.
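Exported CSV files can be consumed with standard tooling. Below is a rough sketch of computing a marker's mean position from a CSV export; the column names and single header row are hypothetical stand-ins, since an actual Motive CSV export carries several metadata and header rows (see the Data Export documentation for the exact layout).

```python
# Sketch: average a marker's position over all frames of a CSV export.
# The miniature CSV below stands in for a real export; real Motive CSV
# files have a multi-row header that must be skipped or parsed first.
import csv
import io

sample = """frame,time,marker1_x,marker1_y,marker1_z
0,0.000,0.10,1.00,0.20
1,0.008,0.11,1.01,0.21
"""

def mean_position(csv_text, axes=("marker1_x", "marker1_y", "marker1_z")):
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    n = len(rows)
    return tuple(sum(float(r[a]) for r in rows) / n for a in axes)
```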

Data Streaming

Motive offers multiple options for streaming tracking data to external applications in real time. Tracking data can be streamed in both Live mode and Edit mode. Streaming plugins are available for Autodesk Motion Builder, Visual3D, The MotionMonitor, Unreal Engine 5, 3ds Max, Maya (VCS), and VRPN, and they can be downloaded from the OptiTrack website. For other streaming options, the NatNet SDK enables users to build custom client and server applications to stream capture data. Common motion capture applications rely on real-time tracking, and the OptiTrack system is designed to deliver data at extremely low latency, even when streaming to third-party pipelines. Detailed instructions for specific streaming protocols are included in the PDF documentation that ships with the respective plugins and SDKs.

See Also: Data Streaming page.
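For readers going the custom-client route, here is a minimal listener sketch using direct depacketization. The default multicast group (239.255.42.99) and data port (1511) are NatNet's documented defaults, and parse_header() assumes the packet layout used in the NatNet direct-depacketization samples (a 2-byte little-endian message ID followed by a 2-byte payload size). For real projects, prefer the NatNet SDK client libraries, which handle the full frame format for you.

```python
# Sketch: receive raw NatNet data packets over UDP multicast and print
# each packet's header. Assumptions: default NatNet multicast/port and
# the 2+2 byte header layout from the direct-depacketization samples.
import socket
import struct

def parse_header(packet: bytes):
    """Return (message_id, payload_size) from a raw NatNet packet."""
    message_id, payload_size = struct.unpack("<HH", packet[:4])
    return message_id, payload_size

def listen(group="239.255.42.99", port=1511):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    # Join the NatNet multicast group on all interfaces.
    mreq = struct.pack("4sl", socket.inet_aton(group), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    while True:
        data, _ = sock.recvfrom(65535)
        msg_id, size = parse_header(data)
        print(f"message id={msg_id}, payload={size} bytes")
```

Decoding the payload itself (Rigid Body poses, Skeleton segments, marker lists) depends on the NatNet bitstream version, which is why the SDK's client libraries are the recommended path for anything beyond experimentation.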

Power the Switches: The switch must be powered in order to power the cameras. To completely shut down the camera system, power off the network switch.

  • Ethernet Cables: Ethernet cable connection is subject to the limitations of the PoE (Power over Ethernet) and Ethernet communications standards, meaning that the distance between camera and switch can go up to about 100 meters when using Cat 6 cables (Ethernet cable type Cat5e or below is not supported). For best performance, do not connect devices other than the computer to the camera network. Add-on network cards should be installed if additional Ethernet ports are required.

  • Ethernet Cable Requirements

    Cable Type

    There are multiple categories of Ethernet cables, each with different specifications for maximum data transmission rate and cable length. For an Ethernet-based system, category 6 or above Gigabit Ethernet cables should be used. 10 Gigabit Ethernet cables (Cat6a or above) are recommended, in conjunction with a 10 Gigabit uplink switch, for the connection between the uplink switch and the host PC in order to accommodate the high data traffic.

    Electromagnetic Shielding

    Also, please use cables with electromagnetic interference shielding. If unshielded cables are used, cables running close to each other can interfere and cause cameras to stall in Motive.

    • External Sync: If you wish to connect external devices, use the eSync synchronization hub. Connect the eSync into one of the PoE switches using an Ethernet cable, or if you have a multi-switch setup, plug the eSync into the aggregation switch.

    • Uplink Switch: For systems with higher camera counts that use multiple PoE switches, use an uplink Ethernet switch to link all of the switches and connect them to the host PC. The switches must be connected in a star topology, with the uplink switch at the central node connecting to the host PC. NEVER daisy-chain multiple PoE switches in series, because doing so can introduce latency to the system.

    • High Camera Counts: For setting up more than 24 Prime series cameras, we recommend using a 10 Gigabit uplink switch and connecting it to the host PC via an Ethernet cable that supports 10 Gigabit transfer rate — Cat6a or above. This will provide larger data bandwidth and reduce the data transfer latency.

    • PoE Switch Requirement: The PoE switches must be able to provide 15.4W of power to every port simultaneously. PrimeX 41, PrimeX 22, and Prime Color camera models run in a high power mode to achieve longer tracking ranges, and they require 30W of power from each port. If you wish to operate these cameras in standard PoE mode, set the LLDP (PoE+) Detection setting to false under the application settings. For network switches provided by OptiTrack, refer to the label for the number of cameras supported by each switch.
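The per-port figures above also constrain the switch's total power budget. A quick sketch for sanity-checking a planned setup; the 15.4W and 30W per-port figures come from the text, while the 370W switch budget is a made-up example:

```python
# Sketch: how many cameras a switch's total PoE power budget can drive.
# 15.4 W (standard PoE) and 30 W (high power mode) come from the text
# above; the 370 W switch budget is a hypothetical example figure.
def ports_supported(switch_budget_w: float, per_camera_w: float) -> int:
    """Number of cameras the switch's total PoE budget can power."""
    return int(switch_budget_w // per_camera_w)

standard = ports_supported(370, 15.4)  # cameras at standard PoE
high_power = ports_supported(370, 30)  # cameras in high power mode
```

Note that the real limit is the smaller of this budget-based count and the switch's physical port count, and that OptiTrack-supplied switches state their supported camera count on the label.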

    Power the OptiHub: Use the provided power adapters to connect each OptiHub to external power. All USB cameras are powered by the OptiHub(s).
  • Connect the Cameras to the OptiHub(s): Use USB 2.0 cables (type B/mini-B) to connect each USB camera to an OptiHub. When using multiple OptiHubs, distribute the camera connections evenly among the OptiHubs in order to balance the processing load. Note that USB extensions are not supported when connecting a camera to an OptiHub.

  • Multiple OptiHubs: Up to four OptiHubs (24 USB cameras) can be used in one system. When setting up multiple OptiHubs, all OptiHubs must be connected, or cascaded, in a series chain with RCA synchronization cables. More specifically, the Hub Sync Out port of one OptiHub needs to be connected to the Hub Sync In port of another OptiHub, as shown in the diagram.

  • External Sync: When integrating external devices, use the External Sync In/Out ports that are available on each OptiHub.

  • Duo/Trio Tracking Bars use the I/O-X USB hub for powering the device (3.0 A), connecting to the computer (USB A-B), and synchronizing with external devices.

    After activation, the License Tool will place the license file associated with the USB Security Key in the License folder. For more license activation questions, visit the Licensing FAQs or contact our Support.

    The Calibration pane is used in the camera calibration process. In order to compute 3D coordinates from captured 2D images, the camera system must first be calibrated. All of the tools necessary for calibration are included in the Calibration pane, which can also be accessed under the View tab or by clicking its icon on the main toolbar.

    Control Deck

    The Control Deck, located at the bottom of Motive, is where you control the recording (Live mode) or playback (Edit mode) of capture data. In Live mode, you can use the Control Deck to start recording and assign a filename for the capture. In Edit mode, you can use this pane to control the playback of recorded Take(s).

    [Motive:Calibration pane] Mask the remaining extraneous reflections using Motive. Click Block Visible from the Calibration pane, or use the icon in the Camera Preview pane, to apply software masking and automatically block any light sources or reflections that cannot be removed from the volume. Once the masks are applied, all of the extraneous reflections (white) in the 2D Camera Preview pane will be covered with red pixels.
    Bring the wand into the capture volume and wave it throughout the volume, allowing the cameras to collect wanding samples.
  • [Motive:Calibration pane] When the system indicates that enough samples have been collected, click the Calculate button to begin the calculation. This may take a few minutes.

  • [Motive:Calibration pane] When the Ready to Apply button becomes enabled, click Apply Result.

  • [Motive] The calibration results window will be displayed. After examining the wanding result, click Apply to apply the calibration.

  • During the wanding process, each camera needs to see only the three markers on the calibration wand. If any of the cameras detect extraneous reflections, go back to the masking step to mask them.

  • Level the calibration square parallel to the ground plane.
  • (Optional) In the 3D view in Motive, select the calibration square markers. If retro-reflective markers on the calibration square are the only reconstructions within the capture volume, Motive will automatically detect the markers.

  • Access the Ground Plane tab in the Calibration pane.

  • While the calibration square markers are selected, click Set Ground Plane from the Ground Plane Calibration Square section.

  • Motive will prompt you to save the calibration file. Save the file to the corresponding session folder.

  • Inspect the gaps in the trajectory.
  • For each gap in frames, look for an unlabeled marker at the expected location near the solved marker position. Re-assign the proper marker label if the unlabeled marker exists.

  • Use the Trim Tails feature to trim both ends of the trajectory at each gap. It trims off a few frames adjacent to the gap where tracking errors might exist, preparing occluded trajectories for gap filling.

  • Find the gaps to be filled, and use the Fill Gaps feature to model the estimated trajectories for occluded markers.

  • Re-solve assets to update the solve from the edited marker data.

    Recommended

    • OS: Windows 10, 11 (64-bit)

    • CPU: Intel i7 or better

    • RAM: 16GB of memory

    • GPU: GTX 1050 or better with the latest drivers

    Minimum

    • OS: Windows 10, 11 (64-bit)

    • CPU: Intel i7

    • RAM: 4GB of memory

    Quick Start Panel

    The Quick Start panel provides quick access to typical initial actions when using Motive. Each option quickly leads you to the layouts and actions for the corresponding selection. If you do not wish to see this panel again, you can uncheck the box at the bottom. This panel can be re-accessed under the Help tab.

    Devices pane

    Connected cameras are listed under the Devices pane. This panel is where you configure settings (FPS, exposure, LED, etc.) for each camera and decide whether to use selected cameras for 3D tracking or reference videos. Only cameras that are set to a tracking mode contribute to reconstructing 3D coordinates; cameras in reference mode capture grayscale images for reference purposes only. The Devices pane can be accessed under the View tab in Motive or by clicking its icon on the main toolbar.

    Properties pane

    When an item is selected in Motive, all of its related properties are listed under the Properties pane. For example, if you select a Skeleton in the 3D viewport, its corresponding properties are listed under this pane, where you can view and configure the settings as needed. You can also select connected cameras, sync devices, Rigid Bodies, any external devices listed in the Devices pane, or recorded Takes to view and configure their properties. This pane is used in almost all of the workflows. The Properties pane can be accessed under the View tab in Motive or by clicking its icon on the main toolbar.

    Perspective View pane

    The Perspective View pane is where 3D data is displayed in Motive. Here, you can view, analyze, and select reconstructed 3D coordinates within a calibrated capture volume. This panel can be used both in live capture and recorded data playback. You can also select multiple markers and define rigid bodies and skeleton assets. If desired, additional view panes can be opened under the View tab or by clicking icons on the main toolbar.

    Camera Preview pane

    The Camera Preview pane shows 2D views of cameras in a system. Here you can monitor each camera view and apply mask filters. This pane is also used to examine 2D objects (circular reflections) that are captured, or filtered, in order to examine what reflections are processed and reconstructed into 3D coordinates. If desired, additional view panes can be opened under the View tab or by clicking icons on the main toolbar.

    Rotate view: right mouse click + drag

    Pan view: middle (wheel) click + drag

    Zoom in/out: mouse wheel

    Select in view: left mouse click

    Toggle selection in view: CTRL + left mouse click

    Quick Start Guide: Tutorial Videos

This page includes all of the Motive tutorial videos for visual learners.

    Updated videos coming soon!

    Motive: Installation and Activation

    Motive: Panes and Layouts

    Motive: Calibration

    Motive: 2D and 3D Views

    Motive: Creating Skeleton Assets

    Motive: Toolbars and Dropdowns

    Motive: Labeling

    Motive: File Management

    Motive: Timeline Pane

    Motive: Labeling using Markersets

    Motive: Marker Rays


    Quick Start Guide: Outdoor Tracking Setup

PrimeX 41, PrimeX 22, Prime 41*, and Prime 17W* camera models have powerful tracking capability that allows tracking outdoors. With strong infrared (IR) LED illumination and some adjustments to its settings, a Prime system can overcome sunlight interference and perform 3D capture. This page provides general hardware and software setup recommendations for outdoor captures.

    Please note that when capturing outdoors, the cameras will have shorter tracking ranges compared to when tracking indoors. Also, the system calibration will be more susceptible to change in outdoor applications because there are environmental variables (e.g. sunlight, wind, etc.) that could alter the system setup. To ensure tracking accuracy, routinely re-calibrate the cameras throughout the capture session.

    Weather and Backgrounds

Even though it is possible to capture under the influence of the sun, it is best to pick cloudy days for captures in order to obtain the best tracking results, for the following reasons:

• Bright daylight illumination will introduce extraneous reconstructions, requiring additional post-processing effort to clean up the captured data.

• Throughout the day, the position of the sun continuously changes, as do the reflections and shadows of nearby objects. For this reason, the camera system needs to be routinely re-masked or re-calibrated.

The surroundings can also work to your advantage or disadvantage depending on the situation. Different outdoor objects reflect 850 nm infrared (IR) light in ways that can be unpredictable without testing. Lining your background with objects that appear black in IR will help distinguish your markers from the background, which will help with tracking. Some examples of outdoor objects and their relative brightness are as follows:

    • Grass typically appears as bright white in IR.

    • Asphalt typically appears dark black in IR.

• Concrete varies, but it usually appears gray in IR.

    Hardware Setup Recommendations

1. [Camera Setup] Camera mount setup

In general, setting up a truss system for mounting the cameras is recommended for stability, but for outdoor captures it may be impractical. For this reason, most outdoor capture applications use tripods for mounting the cameras.

2. [Camera Setup] Camera aim

    Do not aim the cameras directly towards the sun. If possible, place and aim the cameras so that they are capturing the target volume at a downward angle from above.

3. [Camera Setup] Lens f-stop

Increase the f-stop setting on the Prime cameras to decrease the aperture size of the lenses. The f-stop setting determines the amount of light let through the lens: increasing the f-stop value decreases the overall brightness of the captured image, allowing the system to better accommodate sunlight interference. Changing this also allows camera exposures to be set to a higher value, which is discussed in a later section. Note that the f-stop can be adjusted only on PrimeX 41, PrimeX 22, Prime 41*, and Prime 17W* camera models.
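To get a feel for how much a higher f-stop darkens the image, recall that the light reaching the imager scales with the aperture area, which is proportional to 1/(f-stop)². The sketch below is a generic optics illustration (not an OptiTrack formula); the reference f-stop value is an arbitrary assumption.

```python
# Relative illumination vs. f-stop: light reaching the imager scales with
# aperture area, proportional to 1/(f-stop)^2. Illustrative only; actual
# image brightness also depends on exposure, gain, and scene lighting.

def relative_light(f_stop: float, reference_f_stop: float = 2.0) -> float:
    """Light level relative to an (assumed) reference f-stop; 1.0 = same light."""
    return (reference_f_stop / f_stop) ** 2

# Stopping down from f/2 to f/4 cuts the light to one quarter:
print(relative_light(4.0))  # 0.25
print(relative_light(8.0))  # 0.0625
```

Each full stop (a factor of √2 in f-number) halves the light, which is why outdoor setups can afford a much higher f-stop than dim indoor volumes.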

    4. [Camera Setup] Utilize shadows

Even though it is possible to capture under sunlight, the best tracking results are achieved when the capture environment is optimized for tracking. Whenever applicable, utilize shaded areas in order to minimize interference from sunlight.

    Motive Settings

1. [Camera Settings] Max IR LED strength

Increase the LED setting on the camera system to its maximum so that the IR LEDs illuminate at maximum strength. Strong IR illumination allows the cameras to better differentiate the emitted IR reflections from ambient sunlight.

2. [Camera Settings] Camera exposure

In general, increasing the camera exposure makes the overall image brighter, but it also allows the IR LEDs to light up and remain at their maximum brightness for a longer period of time in each frame. This way, the IR illumination is stronger on the cameras, and the imager can more easily detect the marker reflections in the IR spectrum.

When used in combination with an increased f-stop on the lens, this adjustment gives a better distinction of IR reflections. Note that this setup applies only to outdoor applications; for indoor applications, the exposure setting is generally used to control the overall brightness of the image.

    *Legacy camera models


    Quick Start Guide: Active Marker Tracking

    This page provides instructions on how to set up and use the OptiTrack active marker solution.

    Additional Note

    • This guide is for OptiTrack active markers only. Third-party IR LEDs will not work with instructions provided on this page.

    • This solution is supported for Ethernet camera systems (Slim 13E or Prime series cameras) only. USB camera systems are not supported.

    • Motive version 2.0 or above is required.

    • This guide covers active component firmware versions 1.0 and above; this includes all active components that were shipped after September 2017.

• For active components that were shipped prior to September 2017, please see the compatibility notes page for more information about firmware compatibility.

    Overview

The OptiTrack Active Tracking solution allows synchronized tracking of active LED markers using an OptiTrack camera system. It consists of the BaseStation and, depending on the user's choice, Active Tags that can be integrated into any object and/or Active Pucks, each of which can act as its own Rigid Body.

Connected to the camera system, the BaseStation emits RF signals to the active markers, allowing precise synchronization between the camera exposure and the illumination of the LEDs. Each active marker is uniquely labeled in the Motive software, allowing more stable Rigid Body tracking, since active markers will never be mislabeled and unique marker placements are no longer required for distinguishing multiple Rigid Bodies.

    Hardware Setup

Required Components

    BaseStation

    Sends out radio frequency signals for synchronizing the active markers.

    • Powered by PoE, connected via Ethernet cable.

    • Must be connected to one of the switches in the camera network.

    Active Marker Options

    Active Tag

    • Connects to a USB power source and illuminates the active LEDs.

    • Receives RF signals from the Base Station and correspondingly synchronizes illumination of the connected active LED markers.

    • Emits 850 nm IR light.

• Each bundle contains 4 active LEDs, and up to two bundles can be connected to each Tag.

    Active Pucks

A self-contained Active Tag built into a trackable object, providing 6 DoF information for any arbitrary object it is attached to. Each Puck carries a factory-installed Active Tag with 8 LEDs and a rechargeable battery with up to 10 hours of run time on a single charge.

    Wiring the Components

    Camera System

• Active tracking is supported only with Ethernet camera systems (Prime series or Slim 13E cameras). For instructions on how to set up a camera system, see the Hardware Setup page.

    BaseStation

    • Connects to one of the PoE switches within the camera network.

    • For best performance, place the base station near the center of your tracking space, with unobstructed lines of sight to the areas where your Active Tags will be located during use. Although the wireless signal is capable of traveling through many types of obstructions, there still exists the possibility of reduced range as a result of interference, particularly from metal and other dense materials.

    • Do not place external electromagnetic or radiofrequency devices near the Base Station.

    BaseStation LEDs

Note: The behavior of the LEDs on the BaseStation is subject to change.

• Communication Indicator LED: When the BaseStation is successfully sending out data and communicating with the active pucks, the LED closest to the antenna will blink green. If this LED is red, the BaseStation has failed to establish a connection with Motive.

    Tag Setup

    • Connect two sets of active markers (4 LEDs in each set) into a Tag.

• Connect the battery and/or a micro USB cable to power the Tag. The Tag accepts 3.3V to 5.0V input from the micro USB cable. For powering through the battery, use only batteries supplied by OptiTrack. To recharge the battery, keep the battery connected to the Tag and then connect the micro USB cable.

• To initialize the Tag, press the power switch once. Be careful not to hold down the power switch for more than a second, as this will start the device in firmware update (DFU) mode. If it initializes in DFU mode, indicated by two orange LEDs, simply power off and restart the Tag. To power off the Tag, hold down the power switch until the status LEDs go dark.

    Puck Setup

• Press the power button for 1~2 seconds and release. The top-left LED will illuminate orange while the Puck initializes. Once initialized, the bottom LED will light up green if it has made a successful connection with the BaseStation, and the top-left LED will start blinking green, indicating that sync packets are being received.

• For more information, please read through the Active Puck page.

    Motive Settings

Active Pattern Depth

Settings → Live Pipeline → Solver tab. Default value: 12.

This adjusts the complexity of the illumination patterns produced by the active markers. In most applications, the default value can be used for quality tracking results. If a high number of Rigid Bodies are tracked simultaneously, this value can be increased to allow for more combinations of the illumination patterns on each marker. If this value is set too low, duplicate active IDs can be produced; should this error appear, increase the value of this setting.
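To see why a deeper pattern supports more Rigid Bodies, consider a simplified model in which each active marker is identified by a unique on/off illumination sequence across N frames. This sketch is purely illustrative and is not Motive's actual encoding scheme; the exponential growth is the point.

```python
# Hypothetical illustration: if each active marker were identified by a unique
# on/off pattern over `pattern_depth` frames, the pool of distinct IDs would
# grow exponentially with the depth. NOT Motive's real encoding; a toy model
# showing why a larger depth avoids duplicate active IDs.

def max_unique_ids(pattern_depth: int) -> int:
    # One on/off bit per frame; exclude the all-off pattern, which is unusable.
    return 2 ** pattern_depth - 1

print(max_unique_ids(12))  # 4095 distinct patterns at the default depth of 12
```

Under this toy model, raising the depth even slightly multiplies the available ID space, which matches the guidance above: increase the value when many Rigid Bodies produce duplicate-ID errors.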

    Minimum Active Count

Settings → Live Pipeline → Solver tab. Default value: 3.

Sets the number of rays required to establish the active ID for each frame of an active marker cycle. If this value is increased and active markers become occluded, it may take longer for active markers to be re-established in the Motive view. The majority of applications will not need to alter this setting.

    Active Marker Color

Settings → Views → 3D tab. Default color: blue.

    The color assigned to this setting will be used to indicate and distinguish active and passive markers seen in the viewer pane of Motive.

    Camera Settings

    For tracking of the active LED markers, the following camera settings may need to be adjusted for best tracking results:

    Camera Exposure

For tracking active markers, set the camera exposure a bit higher than when tracking passive markers. This allows the cameras to better detect the active markers. The optimal value varies depending on the camera system setup, but in general, set the camera exposure between 400 and 750 microseconds.

    IR LEDs

When tracking only active markers, the cameras do not need to emit IR light. In this case, you can disable the IR settings in the Devices pane.

    Active Markers in Motive

    Active Labels

With a BaseStation and active markers communicating on the same RF channel, active markers will be reconstructed and tracked in Motive automatically. From the unique illumination patterns, each active marker gets labeled individually, and a unique marker ID gets assigned to the corresponding reconstruction in Motive. To check the marker IDs of the respective reconstructions, enable the Marker Labels option under the visual aids, and the IDs of selected markers will be displayed. The marker IDs assigned to active marker reconstructions are unique and can be used to point to a specific marker among many reconstructions in the scene.

Rigid Body definitions that are created from actively labeled reconstructions will search for specific marker IDs, along with the marker placements, to track the Rigid Body. This is further explained in the following section.

    Duplicate active frame IDs

For active labeling to work properly, it is important that each marker has a unique active ID. When more than one marker shares the same ID, there may be problems when reconstructing those active markers. In this case, the following notification message will show up. If you see this notification, please contact support to change the active IDs on the active markers.

    Labels in Recorded 3D Data

    Unlabeled Markers

In recorded 3D data, the labels of unlabeled active markers will still indicate that they are active markers. As shown in the image below, an Active prefix will be assigned in addition to the active ID to indicate that the marker is active. This applies only to individual active markers that are not auto-labeled; markers that are auto-labeled using a trackable model will be assigned their respective labels.

    Auto-labeled Markers

When a trackable asset (e.g. a Rigid Body) is defined using active markers, its active ID information is stored in the asset along with the marker positions. When auto-labeling markers in the space, the trackable asset will search for reconstructions with matching active IDs, in addition to the marker arrangements, to auto-label a set of markers. This adds an additional safeguard to the auto-labeler and prevents mislabeling errors.

    Rigid Body Definition

Rigid Body definitions created from actively labeled reconstructions will search for the respective marker IDs in order to solve the Rigid Body. This is a major benefit because active markers can be placed in perfectly symmetrical arrangements across multiple Rigid Bodies without running into labeling swaps. With active markers, only the 3D reconstructions with active IDs stored under the corresponding Rigid Body definition will contribute to the solve.
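The ID-matching behavior described above can be pictured as a simple filter. The sketch below is conceptual only (plain Python, not the Motive API); the ID values and positions are made up for illustration.

```python
# Conceptual sketch (not the Motive API): a Rigid Body defined from active
# markers stores the active IDs of its markers. When solving, only
# reconstructions whose active ID matches one stored in the definition are
# considered, so identical marker geometries on different bodies cannot swap.

rigid_body_ids = {101, 102, 103, 104}   # hypothetical IDs stored in the definition

reconstructions = {                     # hypothetical active ID -> 3D position
    101: (0.0, 1.0, 0.0),
    102: (0.1, 1.0, 0.0),
    205: (0.5, 0.2, 0.3),               # belongs to a different Rigid Body
}

# Only markers with matching active IDs contribute to this body's solve.
contributing = {mid: pos for mid, pos in reconstructions.items()
                if mid in rigid_body_ids}
print(sorted(contributing))  # [101, 102]
```

Marker 205 is excluded up front, even if its position happened to fit the body's marker arrangement, which is why symmetric layouts are safe with active markers.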

    Rigid Body Properties

If a Rigid Body was created from actively labeled reconstructions, the corresponding active IDs are saved under the Rigid Body properties. In order for the Rigid Body to be tracked, reconstructions with matching marker IDs, in addition to matching marker placements, must be tracked in the volume. If the active ID is set to 0, no particular marker ID is given to the Rigid Body definition, and any reconstruction can contribute to the solve.

    Troubleshooting

Q: Active markers are flickering in the 3D viewport in Motive.

    A:

• Make sure Motive is set to track active markers under the reconstruction settings. The Marker Labeling Mode must be set to either Active Markers Only or Active and Passive Markers.

(8 active LEDs per Tag: 4 LEDs per set × 2 sets)

  • Size: 5 mm (T1 ¾) Plastic Package, half angle ±65°, typ. 12 mW/sr at 100mA

  • When the BaseStation is working properly, the LED closest to the antenna should blink green while Motive is running.

Interference Indicator LED: The middle LED indicates whether there is other signal traffic on the respective radio channel and PAN ID that might interfere with the active components. This LED should stay dark in order for the active marker system to work properly. If it flashes red, consider switching both the channel and PAN ID on all of the active components.

  • Power Indicator LED: The LED located at the corner, furthest from the antenna, indicates power for the BaseStation.

  • Once powered, you should be able to see the illumination of IR LEDs from the 2D reference camera view.

    If flickering occurs on markers of a specific Tag only, try power cycling it. If it tracks fine afterward, the Tag is using v0.8 firmware.

    Quick Start Guide: Prime Color Setup

    This page provides instructions on how to set up, configure, and use the Prime Color video camera.

    Overview

    Prime Color

The Prime Color is a full-color video camera capable of recording synchronized high-speed video. It can also be hooked up to a mocap system and used as a reference camera. The camera records high frame rate video (up to 500 FPS at 480p) with resolutions up to 1080p (at 250 FPS) by performing onboard compression (H.264) of captured frames. It connects to the camera network and receives power through a standard PoE connection.

    eStrobe

When capturing high-speed video, camera exposure times are very short, so providing sufficient lighting becomes critical for obtaining clear images. The eStrobe is designed to optimally brighten the images taken by a Prime Color camera by precisely synchronizing the illumination of the eStrobe LEDs to each camera exposure. This allows the LEDs to illuminate at the right timing, producing the most efficient and powerful lighting for high-speed video capture. Also, the eStrobe emits white light only and will not interfere with tracking in the IR spectrum.

    The eStrobe is intended for indoor use only. For capturing outdoors, the sunlight will provide sufficient lighting for the high-speed capture.

    Computer Requirements

    General Requirements

Required PC specifications may vary depending on the size of the camera system. Generally, the recommended specs are required for systems with more than 24 cameras.


    Graphics Card

Prime Color cameras require the computer to be equipped with a dedicated graphics card with performance of a GTX 1050 or better, running the latest driver that supports OpenGL version 4.0 or higher.

    Storage Capacity

Since each color camera can upload a large amount of data over the network, the size of a recorded Take (TAK) can get quite large even for a short recording. For example, if a 10-second take is recorded with a total data throughput of 1 GBps, the resulting TAK file will be 10 GB, which can quickly fill up the storage device. Please make sure there is enough capacity available on the disk drive. If you export the recorded data to video files after capture, re-encoding the videos can reduce the file sizes by orders of magnitude.
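The storage arithmetic above is simple enough to script when planning capacity. This sketch uses the per-camera figure of ~100 MBps quoted later on this page; actual rates depend on the bit-rate and resolution settings, so treat the result as an upper-bound estimate.

```python
# Rough storage estimate for a Prime Color recording. The ~100 MBps
# per-camera figure comes from this page's bandwidth discussion and is
# the maximum-settings case; real takes are usually smaller.

PER_CAMERA_MBPS = 100  # megabytes per second per color camera, worst case

def take_size_gb(num_color_cameras: int, seconds: float) -> float:
    """Estimated TAK size in gigabytes for a recording of the given length."""
    return num_color_cameras * PER_CAMERA_MBPS * seconds / 1000.0

# The example from the text: ~1 GBps aggregate (10 cameras) for 10 s -> ~10 GB.
print(take_size_gb(10, 10))  # 10.0
```

Multiplying out a planned session length this way makes it easy to check the disk capacity before recording rather than mid-capture.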

    Write Speed

Since Prime Color cameras can output a large amount of data to RAM quickly, it is also important that the write-out speed to storage is fast enough. If the write-out speed to the secondary drive isn't fast enough, the occupied RAM may gradually increase to its capacity. For recording with just one or two Prime Color cameras, a standard SSD will do the job. However, when using multiple Prime Color cameras, it is recommended to use a fast storage drive (e.g. M.2 SSD) that can quickly write out the recorded capture from RAM.

    Network Card

When running two or more Prime Color cameras, the computer must have a 10-gigabit network adapter in order to successfully receive all of the data output from the camera system. Please see the Load Balancing section below for more information.

    Hardware Setup

    Camera Lens

    Different types of lenses can be equipped on a Prime Color camera as long as the lens mount is compatible, however, for Prime Color cameras, we suggest using C-mount lenses to fully utilize the imager. Prime Color cameras with C-mount can be equipped with either the 12mm F#1.8 lenses or the 6.8mm F#1.6 lenses. The 12mm lens is zoomed in more and is more suitable for capturing at long ranges. On the other hand, the 6.8mm lens has a larger field of view and is more suitable for capturing a wide area. Both lenses have adjustable f-stop and focus settings, which can be optimized for different capture environments and applications.

    • F-Stop: Set the f-stop to a low value to make the aperture size bigger. This will allow in more light onto the imager, improving the image quality. However, this may also decrease the camera's depth of field, requiring the lens to be focused specifically on the target capture area.

    • Focus: For best image quality, make sure the lenses are focused on the target tracking area.

    6.5mm F#1.6 lens: When capturing 1080p images with 6.5mm F#1.6 lens, you may see vignetting in each corner of the captured frames due to imager size limitations. For larger FOV, please use the 6.8mm F#1.6 lens to avoid this vignetting issue.

    Load Balancing

    Data Bandwidth

Before going into the details of setting up a system with Prime Color cameras, it is important to go over the data bandwidth available within the camera network. At its maximum setting for capturing the best quality image, one Prime Color camera can transmit data at a rate of up to ~100 megabytes per second (MBps), or ~800 megabits per second (Mbps). For comparison, a tracking camera outputs data at a rate of less than 1 MBps, which is several magnitudes smaller than the output from a Prime Color camera. A standard network switch (1 Gb) and network card support network traffic of only up to 1000 Mbps (1 Gbps). When Prime Color camera(s) are used, they can take up a large portion, or all, of the available bandwidth, and for this reason, extra attention to bandwidth use is needed when first setting up the system.

When there is not enough available bandwidth, captured 2D frames may drop out due to the data bottleneck. Thus, it is important to take the bandwidth consumption into account and make sure an appropriate set of network switches (PoE and uplink), Ethernet cables, and a network card is used. If a 1-Gb network/uplink switch is used, then only one Prime Color camera can be used at its maximum bit-rate setting. If two or more Prime Color cameras need to be used, either a 10-Gb network setup will be required or the bit-rate setting will need to be turned down. A lower bit-rate will further compress the image with a tradeoff in image quality, which may or may not be acceptable depending on the capture application.
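The link-budget reasoning above can be sketched as a quick check, using the ~800 Mbps per-camera figure from this page. This ignores tracking-camera traffic and protocol overhead, so the real headroom is slightly smaller.

```python
# Back-of-the-envelope bandwidth check using figures from this page:
# one Prime Color camera can output up to ~800 Mbps at maximum bit-rate,
# while a 1 Gb link carries 1000 Mbps. Overhead and tracking-camera
# traffic are ignored here, so treat results as an optimistic ceiling.

LINK_MBPS = {"1Gb": 1000, "10Gb": 10000}
COLOR_CAMERA_MBPS = 800  # per camera, at maximum bit-rate setting

def max_color_cameras(link: str) -> int:
    """How many max-bit-rate color cameras fit on the given uplink."""
    return LINK_MBPS[link] // COLOR_CAMERA_MBPS

print(max_color_cameras("1Gb"))   # 1  -> a second camera needs 10Gb or a lower bit-rate
print(max_color_cameras("10Gb"))  # 12
```

This is why the text recommends a 10-gigabit uplink as soon as a second color camera is added at full quality.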

    Detecting Dropped 2D Frames

2D frame drops are logged and can also be identified in the Devices pane, indicated by a warning sign next to the corresponding camera. You may see a few frame drops when booting up the system or when switching between Live and Edit modes; however, this should occur only momentarily. If the system continues to drop 2D frames, there is a problem with receiving the camera data. If this is happening with Prime Color cameras, try lowering the bit-rate; if the system stops dropping frames, there wasn't enough bandwidth available. To use the cameras at a higher bit-rate setting, you will need to properly balance the load within the available network bandwidth.
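Conceptually, dropped 2D frames show up as gaps in an otherwise consecutive sequence of frame IDs. The sketch below is a generic illustration of that idea (not a Motive API; the frame-ID list is made up).

```python
# Illustrative sketch (not a Motive API): dropped 2D frames appear as gaps
# in the consecutive sequence of frame IDs received from a camera.

def find_dropped_frames(frame_ids):
    """Return the frame IDs missing from a sorted sequence of received IDs."""
    dropped = []
    for prev, cur in zip(frame_ids, frame_ids[1:]):
        dropped.extend(range(prev + 1, cur))  # IDs skipped between neighbors
    return dropped

# Hypothetical received IDs: frames 4-5 and 8-9 never arrived.
print(find_dropped_frames([1, 2, 3, 6, 7, 10]))  # [4, 5, 8, 9]
```

A steady trickle of such gaps, rather than a one-off burst at startup, is the signature of a bandwidth problem described above.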

    Note: Due to the current architecture of our bug reporting in Motive, a single color camera will not display dropped frame messages. If you need these messages you will need to either connect another camera or an eSync 2 into the system.

    Cabling

    Power

    Each Prime Color camera must be uplinked and powered through a standard PoE connection that can provide at least 15.4 watts to each port simultaneously.

Prime Color cameras connect to the camera system just like other Prime series camera models. Simply plug the camera into a PoE switch that has enough available bandwidth, and it will be powered and synchronized along with the other tracking cameras. When you have two color cameras, distribute them evenly onto different PoE switches so that the data load is balanced.

When using multiple Prime Color cameras, we recommend connecting the color cameras directly into the 10-gigabit aggregation (uplink) switch, because such a setup is best for preventing bandwidth bottlenecks. A PoE injector will be required if the uplink switch does not provide PoE. This allows the data to travel directly onto the uplink switch and to the host computer through the 10-gigabit network interface. It also separates the color cameras from the tracking cameras.

    eStrobes

    eStrobe Setup

    The eStrobe synchronizes with Prime Color cameras through RCA cable connection. It receives exposure signals from the cameras and synchronizes its illuminations correspondingly. Depending on the frame rate of the camera system, the eStrobe will vary its illumination frequency, and it will also vary the percent duty cycle depending on the exposure length. Multiple eStrobes can be daisy-chained in series by relaying the sync signal from the output port to the input port of another as shown in the diagram.

    Illumination:

    The eStrobe emits only white light and does not interfere with tracking within the IR spectrum. In other words, its powerful illumination will not introduce noise to the IR tracking data.

    Capturing without eStrobes

    When capturing without eStrobes, the camera entirely relies on the ambient lighting to capture the image, and the brightness of the captured frames may vary depending on which type of light source is used. In general, when capturing without an eStrobe, we recommend setting the camera at a lower framerate (30~120 FPS) and increasing the camera exposure to allow for longer exposure time so that the imager can take in more light.

    Indoor

When capturing indoors without the eStrobe, you will rely on the room lighting to brighten the volume. Here, it is important to note that every type of artificial light source illuminates, or flickers, at a certain frequency (e.g. fluorescent light bulbs typically flicker at 120 Hz). This is usually fast enough that the flickering is not noticeable to human eyes; however, with high-speed cameras, the flickering may become apparent.

    When Prime Color captures at a frame rate higher than the ambient illumination frequency, you will start noticing brightness changes between consecutive frames. This happens because, with mismatching frequencies, the cameras are exposing at different points of the illumination phase. For example, if you capture at 240FPS with 120Hz light bulbs lighting up the volume, brightness of captured images may be different in even and odd numbered frames throughout the capture. Please take this into consideration and provide appropriate lighting as needed.
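The even/odd brightness effect can be seen by computing where in the light's brightness cycle each exposure begins. This is a generic timing illustration, not a Motive feature; the frame rate and flicker frequency are the example values from the text.

```python
# At 240 FPS under 120 Hz lighting, consecutive exposures start at
# alternating points of the light's brightness cycle, so even and odd
# frames are lit differently. Generic timing math, not a Motive API.

def exposure_phases(camera_fps, light_hz, n_frames):
    """Phase (0..1) of the light's brightness cycle at the start of each exposure."""
    return [(i * light_hz / camera_fps) % 1.0 for i in range(n_frames)]

# The example from the text: 240 FPS capture with 120 Hz light bulbs.
print(exposure_phases(240, 120, 4))  # [0.0, 0.5, 0.0, 0.5]
```

Matching the frame rate to the flicker frequency (or an integer divisor of it) keeps every exposure at the same phase and removes the alternating brightness.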

    Info: Frequencies of typical light bulbs

    • Fluorescent: Fluorescent light bulbs typically illuminate at 120 Hz with 60 Hz AC input.

    • Incandescent: Incandescent light bulbs typically illuminate at 120 Hz with 60 Hz AC input.

    Outdoor

    When capturing outdoors using Prime Color cameras, sunlight will typically provide enough ambient lighting. Unlike light bulbs, sunlight is emitted continuously, so there is no need to worry about the illumination frequency. Furthermore, the sun is bright enough and you should be able to capture high-quality images by adjusting only the f-stop (aperture size) and the exposure values.

    Setup Check-Point

Now that you have set up a camera system with Prime Color cameras, all of the connected cameras should be listed under the Devices pane. At this point, launch Motive and check the following items to make sure your system is operating properly.

• 2D Frame Delivery: There should be no dropped 2D frames. You can monitor this from the Devices pane. If frame drops are reported continuously, lower the bit-rate setting or revisit the network configuration and make sure the data loads are balanced. For more information, see the Detecting Dropped 2D Frames section of this page.

• CPU Usage: Open the Windows task manager and check the CPU processing load. If only one CPU core is fully occupied, the CPU is not fast enough to process the data from the color camera. In this case, use a faster CPU or lower the bit-rate setting.

    • RAM Usage:

    Camera Settings

When you launch Motive, connected Prime Color cameras will be shown, and you will be able to configure their settings as you would for other tracking cameras. Open the Devices pane and the Properties pane, and select a Prime Color camera. In the Properties pane, key properties specific to the selected color camera will be listed. Optimizing these settings is important in order to obtain the best quality images without flooding the network bandwidth. The key settings for the color cameras are image resolution, gamma correction, and the compression mode and bit-rate settings, which are covered in the following sections.

    Camera Resolution

    Default: 1920, 1080

    This property sets the resolution of the images captured by the selected cameras. Since the amount of data increases with resolution, the maximum allowable frame rate varies with the selected resolution. Below are the maximum allowed frame rates for each resolution setting.

    • 960 x 540 (540p): maximum 500 FPS

    • 1280 x 720 (720p): maximum 360 FPS

    • 1920 x 1080 (1080p): maximum 250 FPS
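The resolution-to-frame-rate limits above can be wrapped in a small lookup helper. A minimal sketch; the limit values are from this guide, while the helper names are our own:

```python
# Maximum frame rate per resolution for Prime Color cameras (per this guide).
MAX_FPS = {
    (960, 540): 500,    # 540p
    (1280, 720): 360,   # 720p
    (1920, 1080): 250,  # 1080p
}

def validate_frame_rate(width, height, fps):
    """Return True if the requested FPS is allowed at this resolution."""
    limit = MAX_FPS.get((width, height))
    if limit is None:
        raise ValueError(f"unknown resolution {width}x{height}")
    return fps <= limit

print(validate_frame_rate(1920, 1080, 240))  # True: within the 250 FPS limit
```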

    Compression Mode

    Default: Constant Bit Rate.

    This property determines how much the captured images will be compressed. The Constant Bit-Rate mode is used by default and recommended because it is easier to control the data transfer rate and efficiently utilize the available network bandwidth.

    Constant Bit-Rate

    In the Constant Bit-Rate mode, Prime Color cameras vary the degree of image compression to match the data transmission rate given under the Bit Rate settings. At a higher bit-rate setting, the captured image will be compressed less. At a lower bit-rate setting, the captured image will be compressed more to meet the given data transfer rate, but compression artifacts may be introduced if it is set too low.

    Variable Bit-Rate

    The Variable Bit-Rate setting is also available; it keeps the amount of compression constant and allows the data transfer rate to vary. This mode can be beneficial when capturing objects with detailed textures because it applies the same amount of compression to every frame. However, it may introduce dropped frames whenever the camera compresses highly detailed images, because the increased data transfer rate may overflow the network bandwidth. For this reason, we recommend the Constant Bit-Rate setting for most applications.

    Bit-rate

    Default: 50

    Available only when using Constant Bit-rate mode.

    The Bit-rate setting determines the transmission rate output from the selected color camera. The value is given as a percentage of the maximum data transmission speed, and each color camera can output up to ~100 MB/s, so the configured value roughly corresponds to the transmission rate in megabytes per second (MB/s). At a bit-rate setting of 100, the camera captures the best quality image, but it could overload the network if there is not enough bandwidth to handle the transmitted data.

    Since the bit-rate controls the amount of data output from each color camera, it is one of the most important settings to configure properly. If your system is experiencing 2D frame drops, one of the system requirements is not being met: network bandwidth, CPU processing, or RAM/disk memory. In such cases, you can decrease the bit-rate setting to reduce the amount of data output from the color cameras.
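To sanity-check a configuration before recording, the bit-rate-to-bandwidth relationship described above can be sketched as follows. The ~100 MB/s per-camera maximum is from this guide; the link capacities and the 80% headroom factor are illustrative assumptions:

```python
# Rough aggregate network load estimate for a set of color cameras.

def camera_output_mb_s(bit_rate_percent, max_mb_s=100):
    """Per-camera output, assuming bit-rate is a percentage of ~100 MB/s."""
    return max_mb_s * bit_rate_percent / 100

def fits_on_link(n_cameras, bit_rate_percent, link_mb_s, headroom=0.8):
    """True if total camera traffic stays under `headroom` of link capacity."""
    total = n_cameras * camera_output_mb_s(bit_rate_percent)
    return total <= link_mb_s * headroom

# Four cameras at bit-rate 50 -> ~200 MB/s total.
# A 1 Gbps uplink (~125 MB/s) is overloaded; a 10 Gbps uplink (~1250 MB/s) is fine.
print(fits_on_link(4, 50, link_mb_s=125))   # False
print(fits_on_link(4, 50, link_mb_s=1250))  # True
```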

    Image Quality

    The image quality increases at a higher bit-rate setting because more data is recorded, but this results in larger file sizes and possible frame drops due to a data bandwidth bottleneck. The desired trade-off depends on the capture application. The graph below illustrates how the image quality varies with the camera frame rate and bit-rate settings.

    Tip: Monitoring data output from each camera

    Data output from the entire camera system can be monitored through the Status Panel. Output from individual cameras can be monitored from the 2D Camera Preview pane when Camera Info is enabled under the visual aids option.

    Gamma

    Default: 24

    Gamma correction is a non-linear amplification of the output image. The gamma setting will adjust the brightness of dark pixels, midtone pixels, and bright pixels differently, affecting both brightness and contrast of the image. Depending on the capture environment, especially with a dark background, you may need to adjust the gamma setting to get best quality images.

    LED

    Default: On

    If you are using eStrobes to light up the capture volume, the LED setting must be enabled on the Prime Color cameras that the eStrobes connect to. When this setting is enabled, the Prime Color camera outputs sync signals from its RCA sync output port, allowing the eStrobes to receive the signal and illuminate their LEDs.

    Calibration

    In order to calibrate the color camera into the 3D capture volume, the Prime Color camera must be equipped with an IR filter switcher. Prime Color cameras without the IR filter switcher cannot be calibrated and can only be used as reference cameras to monitor the reference views in the 2D Camera View pane or in the Cameras viewport.

    When loaded into Motive, Prime Color cameras without the IR filter switcher are hidden in the 3D viewport. Only Prime Color cameras with the filter switcher are shown in the 3D space.

    Prime Color FS Calibration

    The Prime Color FS is equipped with a filter switcher that allows the camera to detect in the IR spectrum. The Prime Color FS can be calibrated into the 3D capture volume using an active calibration wand with IR LEDs. Once calibrated, the color camera is placed in the 3D viewport along with the other tracking cameras, and 3D assets (Marker Sets, Rigid Bodies, Skeletons, cameras) can be overlaid as shown in the image.

    To calibrate the camera, switch the Prime Color FS to Object Mode in the Devices pane. This switches the color camera to detect in the IR spectrum. Then use the active wand to follow the standard calibration process. Once the calibration is finished, you can switch the camera back to color video mode.

    Active Wand:

    Currently, we only take custom orders for the active wands, but in the future they will be available for sale. For additional questions about active wands, please contact us.

    Data Recording / Export

    Once you have set up the system and configured the cameras correctly, Motive is now ready to capture Takes. Recorded TAK files will contain color video along with the tracking data, and you can play them back in Motive. Also, the color reference video can be exported out from the TAK.

    Data Recording

    Once the camera system is set up, you can start recording from Motive. Captured frames are stored in the TAK file, and you can access them again in Edit mode. Please note that capture files containing Prime Color video will be much larger.

    Video Export

    Once the color videos have been saved into TAK files, the captured reference videos can be exported to AVI files using either the H.264 or MJPEG compression format. The H.264 format allows faster export of recorded videos and is recommended. Video for the current TAK can be exported via the File tab -> Export Video option in Motive, or directly from the Data pane by right-clicking on the Take(s) and selecting Export Video from the context menu. The following export dialog window will open, where you can configure the export settings before outputting the files:

    Dropped Frames

    When this is set to Drop Frames, Motive removes any dropped frames from the color video upon export. Please note that dropped frames are completely removed in this case, so the exact frames in the exported file may not match the frames in the corresponding Motive recording. If needed, set this option to Black Frame to insert black, or blank, frames in place of the dropped frames in the exported video.

    Exporting from Multiple TAKs

    If multiple TAK files contain reference video recordings, you can export the videos all at once from the Data pane or through the Motive Batch Processor. When exporting directly from the Data pane, simply CTRL-select multiple TAK files, right-click to bring up the context menu, and click Export Video. When using the batch processor (NMotive), the VideoExporter class can be used to export videos from loaded TAK files.

    Re-encoding Exported Video

    The exported video file can be re-encoded and compressed further by additional subsampling. This can be done with third-party video processing software and can hugely reduce the file size, often by up to two orders of magnitude. Most high-end video editing software supports this; HandBrake (https://handbrake.fr/) is a freely available open-source tool that is also capable of it. Since the exported video file can be large, we suggest re-encoding it with one of these tools.
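As an illustration, a re-encode command for HandBrake's command-line tool can be assembled like this. The file names are placeholders, and the flags should be verified against your HandBrakeCLI version:

```python
# Hypothetical sketch: build a HandBrakeCLI command to re-encode and
# optionally downscale an exported AVI. Nothing is executed here; the
# returned list can be passed to subprocess.run().

def handbrake_cmd(src, dst, quality=24, width=None, height=None):
    cmd = ["HandBrakeCLI", "-i", src, "-o", dst, "-e", "x264", "-q", str(quality)]
    if width and height:  # optional subsampling to shrink the file further
        cmd += ["--width", str(width), "--height", str(height)]
    return cmd

cmd = handbrake_cmd("export.avi", "export_small.mp4", quality=24, width=960, height=540)
print(" ".join(cmd))
# To run it: subprocess.run(cmd, check=True)
```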

    FAQ / Troubleshooting

    Q: Can custom camera lenses be used?

    A: The Prime Color camera uses a standard C mount, so lenses from other vendors can be mounted onto the color camera; however, lens and image quality are not guaranteed. For this reason, we suggest using the lenses that we provide.

    Q: Slow memory write out

    A: If the disk drive on the host PC is not fast enough to write out the data, RAM usage will gradually creep up to maximum memory while recording a capture. In that case, the recorded TAK file may be corrupted or incomplete. If you see this issue, lower the bit-rate setting to reduce the amount of data, or use a faster disk drive.

    Q: There are frame drops even when there is enough bandwidth availability

    A: Dropped 2D frames with Prime Color cameras in the system can be introduced by the following issues:

    • Network Bandwidth: Insufficient network bandwidth will cause frame drops. Make sure the network setup, including the network switches, Ethernet cables, and the network adapter on the host PC, can transmit and receive data fast enough. See: Data Bandwidth.

    Quick Start Guide: Precision Capture

    With an optimized system setup, motion capture systems are capable of obtaining extremely accurate tracking data from a small to medium sized capture volume. This quick start guide includes general tips and suggestions on precision capture system setups and important cautions to keep in mind. This page also covers some of the precision verification methods in Motive. For more general instructions, please refer to the Quick Start Guide: Getting Started or corresponding workflow pages.

    Residual Value

    Before going into details on precision tracking with an OptiTrack system, let's start with a brief explanation of the residual value, which is the key reconstruction output for monitoring system precision. The residual value is the average offset distance, in mm, between the converging rays when reconstructing a marker, indicating the precision of the reconstruction. A smaller residual value means the tracked rays converge more precisely, achieving a more accurate 3D reconstruction. A well-tracked marker has a sub-millimeter average residual value. In Motive, the tolerable residual distance is defined in the reconstruction settings under the Application Settings panel.
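The residual concept can be sketched numerically: given a candidate 3D point and the camera rays that contributed to it, the mean residual is the average perpendicular point-to-ray distance. A minimal sketch with made-up rays (units treated as mm); ray origins, directions, and point values are all illustrative:

```python
# Sketch: mean residual as the average perpendicular distance between a
# reconstructed 3D point and each contributing camera ray.
import math

def point_ray_distance(p, origin, direction):
    """Perpendicular distance from point p to the ray (origin, unit direction)."""
    v = [p[i] - origin[i] for i in range(3)]
    t = sum(v[i] * direction[i] for i in range(3))        # projection length
    closest = [origin[i] + t * direction[i] for i in range(3)]
    return math.dist(p, closest)

def mean_residual(p, rays):
    return sum(point_ray_distance(p, o, d) for o, d in rays) / len(rays)

# Two rays along the X and Y axes both pass through the origin, so a marker
# reconstructed at (0, 0, 1) sits 1 mm off each ray: mean residual = 1.0 mm.
rays = [((0, 0, 0), (1, 0, 0)), ((0, 0, 0), (0, 1, 0))]
print(mean_residual((0, 0, 1), rays))  # 1.0
```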

    When one or more markers are selected, either in Live mode or during playback of captured data, the corresponding mean residual value is displayed in the Status Panel at the bottom-right corner of Motive.

    Capture Volume

    First of all, optimize the capture volume for the most precise and accurate tracking results. Avoid populated areas when setting up the system and recording a capture, and clear any obstacles or trip hazards around the capture volume. Physical impacts on the setup will degrade the calibration quality, which is especially critical when tracking at sub-millimeter accuracy. Lastly, for best results, routinely recalibrate the capture volume.

    Infrared Black Background Objects

    Motion capture cameras detect reflected infrared light, so other reflective objects in the volume will negatively affect the results, which can be critical for precise tracking applications. If possible, use background objects that are IR black and non-reflective. A dark background provides clear contrast between bright and dark pixels, which can be less distinguishable against a white background.

    Camera Placement

    Optimized camera placement techniques will greatly improve the tracking results and measurement accuracy. The following sections highlight important setup instructions for small-volume tracking. For more details on general system setup, read through the Hardware Setup pages.

    Mounting Locations

    For precise tracking, better results are obtained by placing cameras closer to the target object (adjusting focus will be required) in a sphere or dome-shaped arrangement, as shown in the images on the right. Good positional data in all dimensions (X, Y, and Z axes) is attained only when cameras contribute to the calculation from a variety of locations; each unique vantage point adds additional data.

    Mount Securely

    For the most accurate results, cameras should be perfectly stationary, securely fastened to a truss system or an extremely rigid object. Any slight deformation or movement of the mount structure can affect the result in sub-millimeter tracking applications. A small truss system is ideal for the setup. Take extreme caution when mounting onto speed rails attached to a wall, because the building itself may expand and contract on hot days.

    Focus and Aiming

    F-stop

    Increase the f-stop (smaller aperture) to gain a larger depth of field. An increased depth of field keeps a greater portion of the capture volume in focus and makes measurements more consistent throughout the volume.

    Aim and Focus

    Especially for close-up captures, camera aim and focus should be adjusted precisely. Aim the cameras toward the center of the capture volume. Optimize the camera focus by zooming into a marker in Motive and rotating the focus knob on the camera until the smallest marker is captured with the clearest image contrast. To zoom in and out of the camera view, place the mouse cursor over the 2D camera preview window in Motive and use the mouse scroll wheel.

    For more information, please read through the Aiming and Focusing workflow page.

    Motive Settings

    The following sections cover the key configuration settings that need to be optimized for precision tracking.

    Camera Settings

    Camera settings are configured using the Devices pane and the Properties pane, both of which can be opened under the View tab in Motive.

    • Exposure (EXP) — Most stable: For precision capture, it is not always necessary to set the camera exposure to its lowest value. Instead, configure the exposure so that the reconstruction is most stable: zoom into a marker, examine the jitter while changing the exposure setting, and use the value that gives the most stable reconstruction. Later sections cover how to check the reconstruction and tracking quality. For now, set this number as low as possible while maintaining tracking without losing the contrast of the reflections.

    • Gain — 1: Low (Short Range): Set the Gain setting to low for all cameras. Higher gain settings will amplify noise in the image.

    • Frame Rate — Maximum FPS: Set the system frame rate (FPS) to its maximum value. If you wish to use a slower frame rate, use the maximum frame rate during calibration and turn it down for the actual recording.

    • Threshold (THR) / IR LED — 200 / 15: Keep the Threshold (THR) and LED values at their default settings. The EXP and LED values are linked, so change only the EXP setting for brighter images. If you turn EXP higher than 250, wand extra slowly to avoid blurred markers.

    Live-Reconstruction Settings

    Live-reconstruction settings can be configured under the application settings panel. These settings determine which data gets reconstructed into 3D data, and, when needed, you can adjust the filter thresholds to prevent inaccurate data from being reconstructed. Read through the Application Settings page for more details on each setting. For precision tracking applications, the key settings and suggested values are listed below:

    • Residual (mm) — < 2.00: Set the allowable residual value smaller for precision volume tracking. Any offset above 2.00 mm will be considered inaccurate, and the corresponding 2D data will be excluded from contributing to the reconstruction.

    • Minimum Rays — ≥ 3: Set the minimum required number of rays higher. More accurate reconstruction is achieved when more rays converge within the allowable residual offset.

    • Minimum Thresholded Pixels — ≥ 4: Since cameras are placed closer to the tracked markers, each marker will appear bigger in the camera views. The minimum number of thresholded pixels can be increased to filter out small extraneous reflections if needed.

    • Circularity — ≥ 0.6: Increasing the circularity value filters out non-marker reflections and prevents collecting data where the calculated centroid is no longer reliable.

    Calibration

    The following calibration instructions are specific to precision tracking. For more general information, refer to the Calibration page.

    Wands

    For calibrating small capture volumes for precision tracking, we recommend a Micron Series wand: either the CWM-250 or CWM-125. These wands are made of invar alloy, which is very rigid and insensitive to temperature, and they are designed to provide a precise, constant reference dimension during calibration. At the bottom of the wand head, a label shows the factory-calibrated wand length with sub-millimeter accuracy. In the Calibration pane, select Micron Series under the OptiWand dropdown menu, and enter the exact length under Wand Length.

    The CW-500 wand is designed for capturing medium to large volumes and is not suited for calibrating small volumes. Not only does it lack an indication of the factory-calibrated length, but it is also made of aluminum, which makes it more vulnerable to thermal expansion. During the wanding process, Motive references the wand length to calibrate the capture volume, and any distortion of the wand length would cause the calibrated volume to be scaled slightly differently, which can be significant for precise measurements. For this reason, a Micron Series wand is better suited for precision tracking applications.

    Note: Never touch the markers on the CWM-250 or CWM-125, since any change can affect the calibration and overall data quality.

    Precision Capture Calibration Tips

    • Wand slowly. Waving the wand quickly at high exposure settings will blur the markers and distort the centroid calculations, ultimately reducing the quality of your calibration.

    • Avoid occluding any of the calibration markers while wanding. Occluded markers reduce the quality of the calibration.

    • A variety of unique samples is needed to achieve a good calibration. Wand in a three-dimensional volume, waving the wand in a variety of orientations and throughout the volume.

    • Extra wanding in the target region you wish to capture will improve the tracking there.

    • Wanding the edges of the volume helps improve the lens distortion calculations. This may cause Motive to report a slightly worse overall calibration report but will provide a better quality calibration, as explained below.

    • Starting and stopping the calibration process with the wand inside the volume helps avoid collecting rough samples outside the volume when entering and leaving.

    Calibration Results

    Interpreting calibration reports and the reported errors is complicated because the calibration process uses its own samples for validation. For example, sampling near the edge of the volume may improve the accuracy of the system but produce slightly worse calibration results, because samples near the edge contain more errors to be corrected. Acceptable mean error varies with the size of your volume, the number of cameras, and the desired accuracy. The key metrics to watch are the Mean 3D Error for the Overall Reprojection and the Wand Error. Generally, use calibrations with a Mean 3D Error less than 0.80 mm and a Wand Error less than 0.030 mm. These numbers may be hard to reproduce in regular volumes; the acceptable values are subjective, but lower is better in general.
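The rule-of-thumb thresholds above can be expressed as a simple acceptance check. The thresholds are from this guide; as noted, they should be adjusted for your volume size and accuracy goals:

```python
# Sketch: rule-of-thumb acceptance check for a calibration report.

def calibration_acceptable(mean_3d_error_mm, wand_error_mm,
                           max_3d_mm=0.80, max_wand_mm=0.030):
    """True if both error metrics fall under the guide's suggested limits."""
    return mean_3d_error_mm < max_3d_mm and wand_error_mm < max_wand_mm

print(calibration_acceptable(0.45, 0.021))  # True
print(calibration_acceptable(0.95, 0.021))  # False: overall 3D error too high
```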

    Tracking

    Marker Type

    In general, passive retro-reflective markers provide better tracking accuracy. The boundary of a spherical marker is more clearly distinguished on passive markers, so the system can identify an accurate position for the marker centroids. Active markers, on the other hand, emit light, and the illumination may not appear spherical in the camera view. Even if a spherical diffuser is used, the light may not be evenly distributed, which can produce inaccurate centroid data. For this reason, passive markers are preferred for precision tracking applications.

    Marker Placement

    For close-up capture, it may be inevitable to place markers close to one another, and when markers are in close vicinity, their reflections may merge as seen by the camera's imager. Merged reflections will have an inaccurate centroid location, or they may even be discarded entirely by the circularity filter or the intrusion detection feature. For best results, keep the circularity filter at a higher setting (>0.6) and decrease the intrusion band in the camera group's 2D filter settings to make sure only relevant reflections are reconstructed. The optimal balance depends on the number and arrangement of the cameras in the setup.

    There are editing methods to discard or correct the affected data. However, for the most reliable results, marker intrusions should be prevented before the capture by separating the marker placements or by optimizing the camera placements.

    Refine Rigid Body Definition

    Once a Rigid Body is defined from a set of reconstructed points, use the Rigid Body Refinement feature to further refine the Rigid Body definition for precision tracking. This tool lets Motive collect additional samples in Live mode to achieve more accurate tracking results.

    See: Rigid Body Refinement.

    Camera Temperature

    In a mocap system, camera mount structures and other hardware components may be affected by temperature fluctuations. Refer to linear thermal expansion coefficient tables to examine which materials are most susceptible to temperature changes, and avoid temperature-sensitive materials for mounting the cameras. For example, aluminum has a relatively high thermal expansion coefficient, so mounting cameras onto aluminum structures may distort the calibration quality. For best accuracy, routinely recalibrate the capture volume, and take temperature fluctuation into account both when selecting the mount structures and before collecting data.
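To see why the material matters at sub-millimeter scales, linear thermal expansion (ΔL = α · L · ΔT) can be computed directly. A sketch with typical handbook coefficients; the beam length and temperature swing are illustrative assumptions:

```python
# Sketch: linear thermal expansion of a camera mounting beam.
# Typical handbook coefficients: aluminum ~23e-6 /K, invar ~1.2e-6 /K.

def expansion_mm(alpha_per_k, length_m, delta_t_k):
    """Length change in millimetres for a beam of length_m metres."""
    return alpha_per_k * length_m * delta_t_k * 1000  # metres -> millimetres

L, dT = 2.0, 5.0  # a 2 m beam warming by 5 degrees C
print(f"aluminum: {expansion_mm(23e-6, L, dT):.3f} mm")   # ~0.230 mm
print(f"invar:    {expansion_mm(1.2e-6, L, dT):.3f} mm")  # ~0.012 mm
```

A modest 5 °C swing moves an aluminum beam by roughly a quarter of a millimetre, which is on the same order as the residual values this guide targets; invar moves by an order of magnitude less.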

    Ambient Temperature

    An ideal way to avoid the influence of environmental temperature is to install the system in a temperature-controlled volume. If that is not an option, routinely calibrate the volume before capture, and recalibrate between sessions when capturing for a long period. The effects are especially noticeable on hot days and will significantly affect your results, so consistently monitor the average residual value and how well the rays converge to individual markers.

    Camera Heat

    The cameras heat up with extended use, and changes in internal hardware temperature can also affect the capture data. For this reason, avoid capturing or calibrating right after powering up the system. Tests have found that the cameras need to warm up in Live mode for about an hour before reaching a stable temperature. Typical stable temperatures are between 40-50 degrees Celsius, or about 25 degrees Celsius above the ambient temperature. For Ethernet camera models, camera temperatures can be monitored from the Cameras View in Motive (Cameras View > Eye Icon > Camera Info).

    A camera exceeding 80 degrees Celsius is cause for concern: it can cause frame drops and potentially harm the camera. If possible, keep the ambient environment as cool, dry, and consistent as possible.

    Attention: Vibrations

    Especially when measuring at sub-millimeter accuracy, even a minimal shift of the setup can affect the recordings. Re-calibrate the capture volume if your average residual values start to deviate. In particular, watch out for the following:

    • Avoid touching the cameras and the camera mounts.

    • Keep the capture area away from heavy foot traffic. People shouldn't be walking around the volume while the capture is taking place.

    • Closing doors, even from the outside, may be noticeable during recording.

    Reconstruction Verification

    The following methods can be used to check the tracking accuracy and to better optimize the reconstruction settings in Motive.

    Verification Method 1

    First, go into the perspective view pane and select a marker, then go to the Camera Preview pane > Eye Button > Set Marker Centroids: True. Make sure the cameras are in Object Mode, then zoom into the selected marker in the 2D view. The marker will show two crosshairs: one white and one yellow. The offset between the crosshairs indicates how closely the calculated 2D centroid location (thicker white line) aligns with the reconstructed position (thinner yellow line). Switching between grayscale mode and Object Mode makes the errors easier to distinguish. The image below shows an example of a poor calibration; in a good calibration, the yellow and white lines closely align.

    Verification Method 2

    The calibration quality can also be analyzed by checking how the tracked rays converge on a marker. This is not as precise as the first method, but the tracked rays can be used to check the calibration quality of multiple cameras at once. First, make sure tracked rays are visible: Perspective View pane > Eye button > Tracked Rays. Then select a marker in the perspective view pane and zoom all the way in (you may need to zoom into the sphere) to see the tracking rays (green) converging toward the center of the marker. A good calibration should have all rays converging at approximately one point, as shown in the following image. Essentially, this is a visual way of examining the average residual offset of the converging rays.

    Continuous Calibration

    Motive 3.0 introduced a feature called Continuous Calibration, which helps maintain precision for longer between calibrations. For more information, please refer to the Continuous Calibration wiki page.

    Power Requirement:

    The amount of power drawn by each eStrobe varies with the system frame rate and the length of the camera exposure, because the eStrobe varies its illumination rate and percent duty cycle to match those settings. At maximum, one eStrobe can draw up to 240 Watts of power. A typical 110V wall outlet outputs 110V at 15A, which totals 1650W. There may also be other limiting factors, such as the surge protector or extension cords in use. Therefore, in general, we recommend connecting no more than five eStrobes to a single power source.
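The arithmetic above can be checked with a short sketch. The outlet and eStrobe figures are from this guide; the 80% derating is an illustrative safety margin, not an electrical-code calculation:

```python
# Sketch: how many eStrobes fit on one outlet, given a worst-case draw of
# 240 W per eStrobe and a 110 V x 15 A (1650 W) outlet, derated by 20%.

def max_estrobes(outlet_volts=110, outlet_amps=15, strobe_watts=240, margin=0.8):
    """Whole number of eStrobes that fit within the derated outlet budget."""
    return int(outlet_volts * outlet_amps * margin // strobe_watts)

print(max_estrobes())  # 5, matching the recommendation above
```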

    Warning:

    • Be aware of the hot surface: the eStrobe gets very hot while running.

    • Avoid looking directly at the eStrobe; it could damage your eyes.

    • Make sure the power strips or extension cords can handle the power draw. Light-duty components could be damaged if they cannot sufficiently handle the amount of power drawn by the eStrobes.

    • The eStrobe is not typically needed for outdoor use. Sunlight should provide enough lighting for the capture.




    Host PC Requirements

    Recommended specifications:

    • OS: Windows 10, 11 (64-bit)

    • CPU: Intel i7 or better

    • RAM: 16 GB of memory

    • GPU: GTX 1050 or better with the latest drivers

    Minimum specifications:

    • OS: Windows 10, 11 (64-bit)

    • CPU: Intel i5

    • RAM: 4 GB of memory




