This wiki contains instructions on operating OptiTrack motion capture systems. If you are new to the system, start with the Quick Start Guides to begin your capture experience.
You can navigate through pages using links in the sidebar or using links included within the pages. You can also use the search bar provided on the top-right corner to search for page names and keywords that you are looking for. If you have any questions that are not documented in this wiki or from other provided documentation, please check our forum or contact our Support for further assistance.
OptiTrack website: http://www.optitrack.com
The Helpdesk: http://help.naturalpoint.com
NaturalPoint Forums: https://forums.naturalpoint.com
This page includes all of the Motive tutorial videos for visual learners.
Updated videos coming soon!
Welcome to the Quick Start Guide: Getting Started!
This guide provides a quick walk-through of installing and using OptiTrack motion capture systems. Key concepts and instructions are summarized in each section of this page to help you get familiarized with the system and get you started with the capture experience.
Note that Motive offers features far beyond the ones listed in this guide, and the capability of the system can be further optimized to fit your specific capture applications using the additional features. For more detailed information on each workflow, read through the corresponding workflow pages in this wiki: hardware setup and software setup.
For best tracking results, you need to prepare and clean up the capture environment before setting up the system. First, remove unnecessary objects that could block the camera views. Cover open windows and minimize incoming sunlight. Avoid setting up a system over reflective flooring, since IR light from the cameras may reflect off it and add noise to the data. If this is not an option, use rubber mats to cover the reflective area. Likewise, items with reflective surfaces or illuminating features should be removed or covered with non-reflective materials to avoid extraneous reflections.
Key Checkpoints for a Good Capture Area
Minimize ambient lights, especially sunlight and other infrared light sources.
Clean capture volume. Remove unnecessary obstacles within the area.
Tape over or cover any remaining reflective objects in the area.
See Also: Hardware Setup workflow pages.
Ethernet Camera Models: PrimeX series and SlimX 13 cameras. Follow the wiring diagram below and connect each of the required system components.
Connect PoE Switch(es) to the Host PC: Start by connecting a PoE switch to the host PC via an Ethernet cable. Since the camera system takes up a large amount of data bandwidth, the Ethernet camera network traffic must be separated from the office/local area network. If the computer used for capture is connected to an existing network, you will need to use a second Ethernet port or an add-on network card to connect the computer to the camera network. When you do, make sure to turn off your computer's firewall for that particular network under the Windows Firewall settings.
Connect the Ethernet Cameras to the PoE Switch(s): Ethernet cameras connect to the host PC via PoE/PoE+ switches using Cat 6, or above, Ethernet cables.
Power the Switches: The switch must be powered in order to power the cameras. To completely shut down the camera system, the network switch needs to be powered off.
Ethernet Cables: Ethernet cable connections are subject to the limitations of the PoE (Power over Ethernet) and Ethernet communications standards, meaning that the distance between camera and switch can be up to about 100 meters when using Cat 6 cables (Cat 5e or below is not supported). For best performance, do not connect devices other than the computer to the camera network. Install add-on network cards if additional Ethernet ports are required.
Ethernet Cable Requirements
Cable Type
There are multiple categories of Ethernet cables, and each has different specifications for maximum data transmission rate and cable length. For an Ethernet-based system, category 6 or above Gigabit Ethernet cables should be used. 10 Gigabit Ethernet cables (Cat 6a or above) are recommended, in conjunction with a 10 Gigabit uplink switch, for the connection between the uplink switch and the host PC to accommodate the high data traffic.
Electromagnetic Shielding
Please use cables with electromagnetic interference (EMI) shielding. If unshielded cables are used, cables routed close to each other can interfere with one another and cause cameras to stall in Motive.
External Sync: If you wish to connect external devices, use the eSync synchronization hub. Connect the eSync into one of the PoE switches using an Ethernet cable, or if you have a multi-switch setup, plug the eSync into the aggregation switch.
Uplink Switch: For systems with higher camera counts that use multiple PoE switches, use an uplink Ethernet switch to link all of the switches and connect them to the Host PC. The switches must be connected in a star topology with the uplink switch as the central node connecting to the host PC. NEVER daisy-chain multiple PoE switches in series, because doing so can introduce latency to the system.
High Camera Counts: For setting up more than 24 Prime series cameras, we recommend using a 10 Gigabit uplink switch and connecting it to the host PC via an Ethernet cable that supports 10 Gigabit transfer rate — Cat6a or above. This will provide larger data bandwidth and reduce the data transfer latency.
PoE switch requirement: The PoE switches must be able to provide 15.4 W of power to every port simultaneously. PrimeX 41, PrimeX 22, and Prime Color camera models run in a high-power mode to achieve longer tracking ranges, and they require 30 W of power from each port. If you wish to operate these cameras in standard PoE mode, set the LLDP (PoE+) Detection setting to false under the application settings. For network switches provided by OptiTrack, refer to the label for the number of cameras supported by each switch.
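As a rough power-budget illustration using the figures above (switch capacities vary by model, so check your switch's total PoE budget, not just its per-port rating): a fully loaded 24-port switch running standard PoE cameras must continuously supply

$$24 \times 15.4\,\mathrm{W} \approx 370\,\mathrm{W},$$

while the same switch running PrimeX 41, PrimeX 22, or Prime Color cameras at PoE+ must supply $24 \times 30\,\mathrm{W} = 720\,\mathrm{W}$.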
USB Cables: Keep USB cable length restrictions in mind: each USB 2.0 cable must not exceed 5 meters in length.
Connect the OptiHub(s) to the Host PC: Use USB 2.0 cables (type A/B) to connect each OptiHub to the host PC. To optimize available bandwidth, split the OptiHub connections evenly between different USB controllers on the host PC. For large system setups, up to two 5-meter active USB extensions can be used for connecting an OptiHub, providing a total of 15 meters in length.
Power the OptiHub(s): Use the provided power adapters to connect each OptiHub to external power. All USB cameras will be powered by the OptiHub(s).
Connect the Cameras to the OptiHub(s): Use USB 2.0 cables (type B/mini-B) to connect each USB camera to an OptiHub. When using multiple OptiHubs, distribute the camera connections evenly among the OptiHubs in order to balance the processing load. Note that USB extensions are not supported when connecting a camera to an OptiHub.
Multiple OptiHubs: Up to four OptiHubs (24 USB cameras) can be used in one system. When setting up multiple OptiHubs, all OptiHubs must be connected, or cascaded, in a series chain with RCA synchronization cables. More specifically, the Hub Sync Out port of one OptiHub needs to be connected to the Hub Sync In port of another, as shown in the diagram.
External Sync: When integrating external devices, use the External Sync In/Out ports that are available on each OptiHub.
Duo/Trio Tracking Bars use the I/O-X USB hub for powering the device (3.0 A), connecting to the computer (USB A-B), and synchronizing with external devices.
See Also: Network setup page.
Optical motion capture systems utilize multiple 2D images from each camera to compute, or reconstruct, corresponding 3D coordinates. For best tracking results, cameras must be placed so that each captures a unique vantage of the target capture area. Place the cameras around the perimeter of the capture volume, as shown in the example below, so that markers in the volume will be visible by at least two cameras at all times. Mount cameras securely onto stable structures (e.g. a truss system) so that they don't move throughout the capture. When using tripods or camera stands, ensure that they are placed in stable positions. After placing cameras, aim them so that their views overlap around the region where most of the capture will take place. Any significant camera movement after system calibration may require re-calibration. Cable strain relief should be used at the camera end of camera cables to prevent potential damage to the camera.
See Also: Camera Placement and Camera Mount Structures pages.
In order to obtain accurate and stable tracking data, it is very important that all of the cameras are correctly focused on the target volume. This is especially important for close-up and long-range captures. For common tracking applications, focus-to-infinity should work fine; however, it is still important to confirm that each camera in the system is focused.
To adjust or check camera focus, place some markers in the target tracking area. Then, set the camera to raw grayscale mode, increase the exposure and LED settings, and zoom in on one of the retroreflective markers in the capture volume to check the clarity of the image. If the image is blurry, adjust the camera focus and find the point where the marker is best resolved.
See Also: Aiming and Focusing page.
In order to properly run a motion capture system using Motive, the host PC must satisfy the minimum system requirements. The required minimum specifications vary depending on the size of the mocap system and the types of cameras used. Consult our Sales Engineers, or use the Build Your Own feature on our website, to find out the host PC specification requirements.
Motive is a software platform designed to control motion capture systems for various tracking applications. Motive not only allows the user to calibrate and configure the system, but it also provides interfaces for both capturing and processing of 3D data. The captured data can be recorded or live-streamed into other pipelines.
If you are new to Motive, we recommend reading through the Motive Basics page after going through this guide to learn about basic navigation controls in Motive.
Motive Activation Requirements
The following items are required for activating Motive. Please note that the Motive license must still be valid on the release date of the version that you are activating. If the license has expired, please update the license or use an older version of Motive that was released prior to the license expiration date.
Motive 2.x license
USB Hardware Key
Host PC Requirements
Required PC specifications may vary depending on the size of the camera system. Generally, systems with more than 24 cameras require the recommended specs.
Download and Install
To install Motive, simply download the Motive software installer for your operating system from the Motive Download Page, then run the installer and follow its prompts.
Note: Anti-virus software can interfere with Motive's ability to communicate with cameras or other devices, and it may need to be disabled or configured to allow the device communication to properly run the system.
The first time Motive 2.3.x is installed on a computer, the following software also needs to be installed:
Microsoft Visual C++ Redistributables 2013 and 2015
Microsoft DirectX 9c
OptiTrack USB Drivers
It is important to install the specific versions required by Motive 2.3.x, even if newer versions are installed.
License Activation Steps
Insert the USB Hardware Key into a USB-A port on the computer. If needed, you can also use a USB-A adapter to connect.
Launch Motive
Activate your software using the License Tool, which can be accessed in the Motive splash screen. You will need to input the License Serial Number and the Hash Code for your license.
After activation, the License Tool will place the license file associated with the USB Hardware Key in the License folder. For other license activation questions, visit the Licensing FAQs or contact our Support.
Notes on using USB Hardware Key
When connecting the USB Hardware Key to the computer, avoid sharing the USB controller with other USB devices that frequently transmit large amounts of data. For example, if you have external devices (e.g. force plates, NI-DAQ) that communicate via USB, connect those devices to a separate USB controller so that they don't interfere with the Hardware Key.
When you first launch Motive, the Quick Start panel will show up, and you can use this panel to quickly get started on specific tasks. By default, Motive will start on the Calibration Layout. Using this layout, you can calibrate the camera system and construct a 3D tracking volume. Note that the initial layout may be slightly different for different camera models or software licenses.
The following table briefly explains purposes of some of the panels on the initial layout:
See Also: List of UI pages from the Motive section of the wiki.
Use the following controls for navigating throughout the 2D and 3D viewports in Motive. Most of the navigation controls are customizable, including both mouse and Hotkey controls. The Hotkey Editor Pane and the Mouse Control Pane under the Edit tab allow you to customize mouse navigation and keyboard shortcuts to common operations.
Now that the cameras are connected and showing up in Motive, the next step is to configure the camera settings. Appropriate camera settings will vary depending on various factors including the capture environment and tracked objects. The overall goal is to configure the settings so that the marker reflections are clearly captured and distinguished in the 2D view of each camera. For a detailed explanation on individual settings, please refer to the Devices pane page.
To check whether the camera settings are optimized, it is best to check both the grayscale mode images and the tracking mode (Object or Precision) images and make sure the marker reflections stand out from the image. You can switch a camera into grayscale mode either in Motive or by using the Aim Assist button on supported cameras. In Motive, you can right-click on the Cameras Viewport and switch the video mode in the context menu, or you can change the video mode through the Properties pane.
Exposure Setting
The exposure setting determines how long the camera imagers are exposed per frame of data. The longer the exposure, the more light the camera captures, creating brighter images that can improve visibility for small and dim markers. However, high exposure values can introduce false markers, larger marker blooms, and marker blurring, all of which can negatively impact marker data quality. It is best to keep the exposure setting as low as possible while the markers remain clearly visible in the captured images.
Tip: For the calibration process, click the Layout → Calibrate menu (CTRL + 1) to access the calibration layout.
In order to start tracking, all cameras must first be calibrated. Through the camera calibration process, Motive computes the position and orientation of each camera (extrinsics) as well as the amount of lens distortion in the captured images (intrinsics). Using the calibration results, Motive constructs a 3D capture volume, and within this volume, motion tracking is accomplished. All of the calibration tools can be found in the Calibration pane. Read through the Calibration page to learn about the calibration process and what other tools are available for more efficient workflows.
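As a rough mental model, the standard pinhole-plus-distortion formulation (a textbook sketch; Motive's internal solver is not documented here) estimates for each camera a projection of the form

$$\mathbf{x} \simeq K \, [R \mid t] \, \mathbf{X},$$

where the intrinsics $K$ (focal length, principal point) together with the lens-distortion terms describe how a 3D point $\mathbf{X}$ maps to image point $\mathbf{x}$, and the extrinsics $[R \mid t]$ encode the camera's orientation and position. Wanding supplies the solver with many simultaneous observations of the same three wand markers, from which both sets of parameters are refined.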
See Also: Calibration page.
Duo/Trio Tracking Bars: Camera calibration is not needed for Duo/Trio Tracking Bars. The cameras are pre-calibrated using the fixed camera placements. This allows the tracking bars to work right out of the box without the calibration process. To adjust the ground plane, use the Coordinate System Tools in Motive.
Masking
Remove any unwanted objects and physically cover any extraneous IR light reflections or interferences within the capture volume.
[Motive:Calibration pane] In Motive, open the Calibration pane or use the calibration layout (CTRL + 1).
Wanding
Bring out the calibration wand.
[Motive:Calibration pane] From the Calibration pane, make sure the Calibration Type is set to Full and the correct wand type is specified under the OptiWand section.
[Motive:Calibration pane] Click Start Wanding to begin wanding.
Bring the wand into the capture volume, and wave the wand throughout the volume and allow cameras to collect wanding samples.
[Motive:Calibration pane] When the system indicates enough samples have been collected, click the Calculate button to begin the calculation. This may take a few minutes.
[Motive:Calibration pane] When the Ready to Apply button becomes enabled, click Apply Result.
[Motive] The calibration results window will be displayed. After examining the wanding result, click Apply to apply the calibration.
Wanding tips
For best results, collect wand samples evenly and comprehensively throughout the volume, covering both low and high elevations. If you wish to start calibrating inside the volume, cover one of the markers and expose it wherever you wish to start wanding. When at least two cameras detect all three markers while no other reflections are present in the volume, the wand will be recognized, and Motive will start collecting samples.
A sufficient sample count for calibration may vary for different sized volumes, but in general, collect 2,500 to 6,000 samples for each camera. Once a sufficient number of samples has been collected, press the Calculate button under the Calibration section.
During the wanding process, each camera needs to see only the three markers on the calibration wand. If any of the cameras are detecting extraneous reflections, go back to the masking step to mask them.
Setting the Ground Plane
Now that all of the cameras have been calibrated, the next step is to define the ground plane of the capture volume.
Place a calibration square inside the capture volume. Position the square so that the vertex marker is placed directly over the desired global origin.
Orient the calibration square so that the longer arm points along the desired +Z axis and the shorter arm points along the desired +X axis of the volume. Motive uses a y-up right-handed coordinate system.
Level the calibration square parallel to the ground plane.
(Optional) In the 3D view in Motive, select the calibration square markers. If retro-reflective markers on the calibration square are the only reconstructions within the capture volume, Motive will automatically detect the markers.
Access the Ground Plane tab in the Calibration pane.
While the calibration square markers are selected, click Set Ground Plane from the Ground Plane Calibration Square section.
Motive will prompt you to save the calibration file. Save the file to the corresponding session folder.
Once the camera system has been calibrated, Motive is ready to collect data. But before doing so, let's prepare the session folders for organizing the capture recordings and define the trackable assets, including Rigid Bodies and/or Skeletons.
Motive Recordings
See Also: Motive Basics page.
Motive Profiles
Motive's software configurations are saved to Motive Profiles (*.motive extension). All of the application-related settings can be saved into a Motive profile, and you can export and import these files to easily maintain the same software configurations.
Place the retro-reflective markers onto subjects (Rigid Body or Skeleton) that you wish to track. Double-check that the markers are attached securely. For skeleton tracking, open the Builder pane, go to skeleton creation options, and choose a marker set you wish to use. Follow the skeleton avatar diagram for placing the markers. If you are using a mocap suit, make sure that the suit fits as tightly as possible. Motive derives the position of each body segment from related markers that you place on the suit. Accordingly, it is important to prevent the shifting of markers as much as possible. Sample marker placements are shown below.
See Also: Markers page for marker types, or Rigid Body Tracking and Skeleton Tracking page for placement directions.
Tip: For creating trackable assets, click the Layout → Create menu item to access the model creation layout.
Create Rigid Body
To define a Rigid Body, simply select three or more markers in the Perspective View, right-click, and select Rigid Body → Create Rigid Body From Selected. You can also use the CTRL+T hotkey, or define the Rigid Body from the Builder pane.
Create Skeleton
To define a skeleton, have the actor enter the volume with markers attached at the appropriate locations. Open the Builder pane and select Skeleton and Create. Under the marker set section, select a marker set you wish to use, and a corresponding model with the desired marker locations will be displayed. After verifying that the marker locations on the actor correspond to those in the Builder pane, instruct the actor to strike the calibration pose. The most common calibration pose is the T-pose, which requires a proper standing posture with the back straight and the head looking directly forward, and both arms stretched out to the sides, forming a “T” shape. While in T-pose, select all of the markers of the desired skeleton in the 3D view and click the Create button in the Builder pane. In some cases, you may not need to select the markers if only the desired actor is in view.
See Also: Rigid Body Tracking page and Skeleton Tracking page.
Tip: For recording a capture, use the Layout → Capture menu item to access the capture layout.
Once the volume is calibrated and skeletons are defined, you are ready to capture. In the Control Deck at the bottom, press the dimmed red record button, or simply press the spacebar while in Live mode, to begin capturing. The button will illuminate in bright red to indicate recording is in progress. You can stop recording by clicking the record button again, and a corresponding capture file (TAK extension), also known as a capture Take, will be saved within the current session folder. Once a Take has been saved, you can play back captures, reconstruct, edit, and export your data in a variety of formats for additional analysis or use with most 3D software.
When tracking skeletons, it is beneficial to start and end the capture with a T-pose. This allows you to recreate the skeleton in post-processing when needed.
See Also: Data Recording page.
After capturing a Take, the recorded 3D data and its trajectories can be post-processed using the Data Editing tools, which can be found in the Edit Tools pane. Data editing tools provide post-processing features such as deleting unreliable trajectories, smoothing selected trajectories, and interpolating missing (occluded) marker positions. Post-editing the 3D data can improve the quality of the tracking data.
Tip: For data editing, use the Layout → Edit menu item to access the edit layout.
General Editing Steps
Skim through the overall frames in a Take to get an idea of which frames and markers need to be cleaned up.
Refer to the Labels pane and inspect the gap percentage for each marker.
Select a marker that is often occluded or misplaced.
Look through the frames in the Graph pane, and inspect the gaps in the trajectory.
For each gap in frames, look for an unlabeled marker at the expected location near the solved marker position. Re-assign the proper marker label if the unlabeled marker exists.
Use the Trim Tails feature to trim both ends of the trajectory at each gap. It trims off a few frames adjacent to the gap where tracking errors might exist. This prepares occluded trajectories for gap filling.
Find the gaps to be filled, and use the Fill Gaps feature to model the estimated trajectories for occluded markers.
Re-solve assets to update the solve from the edited marker data.
Markers detected in the camera views get trajectorized into 3D coordinates. The reconstructed markers need to be labeled for Motive to distinguish different trajectories within a capture. Trajectories of annotated reconstructions can be exported individually or used (solved altogether) to track the movements of the target subjects. Markers associated with Rigid Bodies and Skeletons are labeled automatically through the auto-labeling process. Note that Rigid Body and Skeleton markers can be auto-labeled both in Live mode (before capture) and in Edit mode (after capture). Individual markers can also be labeled, but each such marker needs to be manually labeled in post-processing using assets and the Labeling pane. These manual labeling tools can also be used to correct any labeling errors. Read through the Labeling page for more details on assigning and editing marker labels.
Auto-label: Automatically label sets of Rigid Body markers and skeleton markers using the corresponding asset definitions.
Manual Label: Label individual markers manually using the Labeling pane, assigning labels defined in the Marker Set, Rigid Body, or Skeleton assets.
See Also: Labeling page.
Changing Marker Labels and Colors
When needed, you can use the Marker Sets pane to adjust marker labels for both Rigid Body and Skeleton markers. You can also adjust marker sticks and marker colors as needed.
Motive exports reconstructed 3D tracking data in various file formats, and exported files can be imported into other pipelines to further utilize capture data. Supported formats include CSV and C3D for Motive: Tracker, and additionally FBX, BVH, and TRC for Motive: Body. To export tracking data, select a Take to export and open the export dialog window, which can be accessed from File → Export Tracking Data or by right-clicking a Take in the Data pane and selecting Export Tracking Data. Multiple Takes can be selected and exported from Motive or by using the Motive Batch Processor. In the export dialog window, the frame rate, measurement scale, and frame range of the exported data can be configured. Frame ranges can also be specified by selecting a frame range in the Graph View pane before exporting a file. In the export dialog window, corresponding export options are available for each file format.
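For example, here is a minimal sketch of loading an exported CSV in Python. The file name is hypothetical, and the header row count is an assumption: Motive CSVs begin with several metadata rows (take info, marker names, axis labels) whose exact count varies by version and export options, so adjust header_rows for your file.

```python
# A minimal sketch for reading a Motive CSV export with the standard library.
import csv

def load_motive_csv(path, header_rows=7):
    # header_rows is an assumption; inspect your export to find where the
    # per-frame data actually begins.
    with open(path, newline="") as f:
        rows = list(csv.reader(f))
    meta = rows[:header_rows]   # take metadata and column labels
    data = rows[header_rows:]   # one row per frame: frame index, time, x/y/z...
    return meta, data

meta, data = load_motive_csv("Session1/Take001.csv")  # hypothetical path
print("frames exported:", len(data))
```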
See Also: Data Export page.
Motive offers multiple options to stream tracking data to external applications in real time. Tracking data can be streamed in both Live mode and Edit mode. Streaming plugins are available for Autodesk MotionBuilder, Visual3D, The MotionMonitor, Unreal Engine 5, 3ds Max, Maya (VCS), and VRPN, and they can be downloaded from the OptiTrack website. For other streaming options, the NatNet SDK enables users to build custom client and server applications to stream capture data. Common motion capture applications rely on real-time tracking, and the OptiTrack system is designed to deliver data at extremely low latency even when streaming to third-party pipelines. Detailed instructions on specific streaming protocols are included in the PDF documentation that ships with the respective plugins or SDKs.
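As an illustration, the sketch below receives streamed Rigid Body data using the Python sample client (NatNetClient.py) that ships with the NatNet SDK. The listener attribute and callback signatures have varied between SDK versions, so treat the names here as assumptions to verify against the sample in your SDK download.

```python
# A minimal sketch, assuming NatNetClient.py from the NatNet SDK samples is
# on the Python path. Older SDKs used camelCase names (rigidBodyListener).
import time

from NatNetClient import NatNetClient

def on_rigid_body(body_id, position, rotation):
    # position: (x, y, z) in meters; rotation: quaternion (qx, qy, qz, qw)
    print(body_id, position, rotation)

client = NatNetClient()
client.rigid_body_listener = on_rigid_body
client.run()  # spawns listener threads for the command/data sockets

while True:  # keep the process alive while frames stream in
    time.sleep(1)
```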
See Also: Data Streaming page
This page is an introduction showing how to use OptiTrack cameras to set up an LED Wall for Virtual Production. This process is also called In-Camera Virtual Effects or InCam VFX. This is an industry technique used to simulate the background of a film set to make it seem as if the actor is in another location.
This tutorial requires Motive 2.3.x, Unreal Engine 4.27, and the Unreal Engine: OptiTrack Live Link Plugin.
This is a list of required hardware and what each portion is used for.
The OptiTrack system is used to track the camera, calibration checkerboard, (optional) LED Wall, and (optional) any other props or additional cameras. As far as OptiTrack hardware is concerned, you will need all of the typical hardware for a motion capture system plus an eSync2, BaseStation, CinePuck, Probe, and a few extra markers. Please refer to the Quick Start Guide for instructions on how to do this.
You will need one computer to drive Motive/OptiTrack and another to drive the Unreal Engine System.
Motive PC - The CPU is the most important component and should use the latest generation of processors.
Unreal Engine PC - Both the CPU and GPU are important. However, the GPU in particular needs to be top of the line to render the scene, for example an RTX 3080 Ti. Setups that involve multiple LED walls stitched together will require graphics cards that can synchronize with each other, such as the NVIDIA A6000.
The Unreal Engine computer will also require an SDI input card with both SDI and genlock support. We used the BlackMagic Decklink SDI 4K and the BlackMagic Decklink 8K Pro in our testing, but other cards will work as well.
You will need a studio video camera with SDI out, timecode in, and genlock in support. Any studio camera with these BNC ports will work, and there are a lot of different options for different budgets. Here are some suggestions:
Sony PXW-FS7 (What we use internally)
Etc...
Cameras without these synchronization features can be used, but the footage may look like it is stuttering due to frames not perfectly aligning.
A camera dolly or other type of mounting system will be needed to move and adjust the camera around your space, so that the movement looks smooth.
Your studio camera should have a cage around it in order to mount objects to the outside of it. You will need to rigidly mount your CinePuck to the outside. We used SmallRig NATO Rail and Clamps for the cage and Rigid Body mounting fixtures.
You'll also need a variety of cables to connect from the camera back to where the computers are located. This includes things such as power cables, BNC cables, USB extension cables (optional, for powering the CinePuck), etc. These are not all listed here, since they will depend on your particular system setup.
Many systems will want a lens encoder in the mix. This is only necessary if you plan on zooming your lens in/out between shots. We do not use this device in this example, for simplicity.
In order to run your LED wall, you will need two things: an LED Wall and a Video Processor.
For large walls composed of LED wall subsections you will need an additional video processor and an additional render PC for each wall as well as an SDI splitter. We are using a single LED wall for simplicity.
The LED Wall portion contains the grid of LED lights, the power structure, and the connections from the panels to a video controller, but it does not by itself accept an HDMI signal.
We used Planar TVF 125 for our video wall, but there are many other options out there depending on your needs.
The video processor is responsible for taking an HDMI/Display Port/SDI signal and rendering it on the LED wall. It's also responsible for synchronizing the refresh rate of the LED wall with external sources.
The video processor we used for controlling the LED wall was the Color Light Z6. However, Brompton Technology video processors are a more typical film standard.
You will need either a timecode generator AND a genlock generator, or a single device that does both. Without these devices, the exposure of your camera will not align with when the LED wall renders, and you may see the LED wall's refresh in the footage. These signals are used to synchronize Motive, the cinema camera, the LED Walls, and any other devices together.
Timecode - The timecode signal should be fed into Motive and the Cinema camera. The SDI signal from the camera will plug into the SDI card, which will carry the timecode to the Unreal Engine computer as well.
Genlock - The genlock should be fed into Motive, the cinema camera, and the Video Processor(s).
Timecode is for frame alignment. It allows you to synchronize data in post by aligning the timecode values together. (However, it does not guarantee that the cameras expose and the LED wall renders at the same time.) There are a variety of different manufacturers that will work for timecode generators. Here are some suggestions:
Etc...
Genlock is for frame synchronization. It allows you to synchronize data in real-time by aligning the times when a camera exposes or an LED Wall renders its image. (However, it does not align frame numbers, so one system could be on frame 1 and another on frame 23.) There are a variety of different manufacturers that will work for genlock generators. Here are some suggestions:
Etc...
Below is a diagram that shows what devices are connected to each other. Both Genlock and Timecode are connected via BNC ports on each device.
Plug the Genlock Generator into:
eSync2's Genlock-In BNC port
Any of the Video Processor's BNC ports
Studio Video Camera's Genlock port
Plug the TimeCode Generator into:
eSync2's Timecode-In BNC port
Studio Video Camera's TC IN BNC port
Plug the Studio Video Camera into:
Unreal Engine PC SDI IN port for Genlock via the SDI OUT port on the Studio Video Camera
Unreal Engine PC SDI IN port for Timecode via the SDI OUT port on the Studio Video Camera
A rigid board with a black and white checkerboard on it is needed to calibrate the lens characteristics. This object will likely be replaced in the future.
There are a lot of hardware devices required, so below is a rough list of required hardware as a checklist.
Truss or other mounting structure
Prime/PrimeX Cameras
Ethernet Cables
Network Switches
Calibration Wand
Calibration Square
Motive License
License Dongle
Computer (for Motive)
Network Card for the Computer
CinePuck
BaseStation (for CinePuck)
eSync2
BNC Cables (for eSync2)
Timecode Generator
Genlock Generator
Probe (optional)
Extra markers or trackable objects (optional)
Cinema/Broadcast Camera
Camera Lens
Camera Movement Device (ex. dolly, camera rails, etc...)
Camera Cage
Camera power cables
BNC Cables (for timecode, SDI, and Genlock)
USB C extension cable for powering the CinePuck (optional)
Lens Encoder (optional)
Truss or mounting system for the LED Wall
LED Wall
Video Processor
Cables to connect between the LED Wall and Video Processor
HDMI or other video cables to connect to Unreal PC
Computer (for Unreal Engine)
SDI Card for Cinema Camera input
Video splitters (optional)
Video recorder (for recording the camera's image)
Checkerboard for Unreal calibration process
Non-LED Wall based lighting (optional)
Next, we'll cover how to configure Motive for tracking.
We assume that you have already set up and calibrated Motive before starting this section. If you need help getting started with Motive, please refer to our Getting Started wiki page.
After calibrating Motive, you'll want to set up your active hardware. This requires a BaseStation and a CinePuck.
Plug the BaseStation into a Power over Ethernet (PoE) switch just like any other camera.
CinePuck
Firmly attach the CinePuck to your Studio Camera using your SmallRig NATO Rail and Clamps on the cage of the camera.
The CinePuck can be mounted anywhere on the camera, but for best results put the puck closer to the lens.
Turn on your CinePuck, and let it calibrate the IMU bias by waiting until the flashing red and orange lights turn into flashing green lights.
It is recommended to power the CinePuck using a USB connection for the duration of filming a scene to avoid running out of battery power; a light on the CinePuck should turn on when the power is connected.
Change the tracking mode to Active + Passive.
Create a Rigid Body out of the CinePuck markers.
For active markers, turning up the allowable residual will usually improve tracking.
Go through a refinement process in the Builder pane to get the highest quality Rigid Body.
Show advanced settings for that Rigid Body, then input the Active Tag ID and Active RF (radio frequency) Channel for your CinePuck.
If you don't have this information, then consult the IMU tag instructions found here: Active Marker Tracking: IMU Setup.
If you input the IMU properties incorrectly or it is not successfully connecting to the BaseStation, then your Rigid Body will turn red. If you input the IMU properties correctly and it successfully connects to the BaseStation, then it will turn orange and need to go through a calibration process. Please refer to the table below for more detailed information.
You will need to move the Rigid Body around in each axis until it turns back to the original color. At this point you are tracking with both the optical marker data and the IMU data through a process called sensor fusion. This takes the best aspects of both the optical motion capture data and the IMU data to make a tracking solution better than when using either individually. As an option, you may now turn the minimum markers for your Rigid Body down to 1 or even 0 for difficult tracking situations.
After Motive is configured, we'll need to set up the LED Wall and Calibration Board as trackable objects. This is not strictly necessary for the LED Wall, but it will make setup easier later and remove the need to set the ground plane precisely.
Before configuring the LED Wall and Calibration Board, you'll first want to create a probe Rigid Body. The probe can be used to measure locations in the volume using the calibrated position of the metal tip. For more information on using the probe measurement tool, please visit our wiki page Measurement Probe Kit Guide.
Place four to six markers on the LED Wall without covering the LEDs on the Wall.
Use the probe to sample the corners of the LED Wall.
You will need to make a simple plane geometry that is the size of your LED wall using your favorite 3D editing tool such as Blender or Maya. (A sample plane comes with the Unreal Engine Live Link plugin if you need a starting place.)
If the plane does not perfectly align with the probe points, then you will need to use the gizmo tool to align the geometry. If you need help setting up or using the Gizmo tool please visit our other wiki page Gizmo Tool: Translate, Rotate, and Scale Gizmo.
Any changes you make to the geometry will need to be on the Rigid Body position and not the geometry offset.
You can make these adjustments using the Builder pane, then zeroing the Attach Geometry offsets in the Properties pane.
Place four to six markers without covering the checkered pattern.
Use the probe to sample the bottom-left vertex of the grid.
Use the gizmo tool to orient the Rigid Body pivot and place the pivot at the sampled location.
Next, you'll need to make sure that your eSync is configured correctly.
If not already done, plug your genlock and timecode signals into the appropriately labeled eSync input ports.
Select the eSync in the Devices pane.
In the Properties pane, check to see that your timecode and genlock signals are coming in correctly at the bottom.
Then, set the Source to Video Genlock In, and set the Input Multiplier to a value of 4 if your genlock is at 30 Hz or 5 if your genlock is at a rate of roughly 24 Hz.
Your cameras should stop tracking for a few seconds, then the rate in the Devices pane should update if you are configured correctly.
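As a sanity check on the Input Multiplier arithmetic (assuming a target camera rate near 120 FPS, which is what these multipliers produce): the camera system runs at the genlock rate times the Input Multiplier, so

$$30\,\mathrm{Hz} \times 4 = 120\,\mathrm{Hz}, \qquad 23.98\,\mathrm{Hz} \times 5 \approx 119.9\,\mathrm{Hz}.$$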
Make sure to turn on Streaming in Motive, then you are all done with the Motive setup.
Start Unreal Engine and choose the default project under the “Film, Television, and Live Events” section called “InCamera VFX”.
Before we get started, verify that the following plugins are enabled:
Camera Calibration (Epic Games, Inc.)
OpenCV Lens Distortion (Epic Games, Inc.)
OptiTrack - LiveLink (OptiTrack)
Media Player Plugin for your capture card (For example, Blackmagic Media Player)
Media Foundation Media Player
WMF Media Player
Many of these will be already enabled.
The main setup process consists of four general steps:
1. Set up the video media data.
2. Set up FIZ and Live Link sources.
3. Track and calibrate the camera in Unreal Engine.
4. Set up nDisplay.
Right click in the Content Browser Panel > Media > Media Bundle and name the Media Bundle something appropriate.
Double click the Media Bundle you just created to open the properties for that object.
Set the Media Source to the Blackmagic Media Source, the Configuration to the resolution and frame rate of the camera, and set the Timecode Format to LTC (Linear Timecode).
Drag this Media Bundle object into the scene and you’ll see your video appear on a plane.
You’ll also need to create two other video sources doing roughly the same steps as above.
Right click in the Content Browser Panel > Media > Blackmagic Media Source.
Open it, then set the configuration and timecode options.
Right click in the Content Browser Panel > Media > Media Profile.
Click Configure Now, then Configure.
Under Media Sources set one of the sources to Blackmagic Media Source, then set the correct configuration and timecode properties.
Before we set up timecode and genlock, it’s best to have a few visual metrics visible to validate that things are working.
In the Viewport click the triangle dropdown > Show FPS and also click the triangle dropdown > Stat > Engine > Timecode.
This will show timecode and genlock metrics in the 3D view.
If not already open you’ll probably want the Window > Developer Tools > Timecode Provider and Window > Developer Tools > Genlock panels open for debugging.
You should notice that your timecode and genlock are noticeably incorrect; this will be corrected in the steps below.
The timecode will probably just be the current time.
To create a timecode blueprint, right click in the Content Browser Panel > Blueprint > BlackmagicTimecodeProvider and name the blueprint something like “BM_Timecode”.
The settings for this should match what you did for the Video Data Source.
Set the Project Settings > Engine General Settings > Timecode > Timecode Provider = “BM_Timecode”.
At this point your timecode metrics should look correct.
Right click in the Content Browser Panel > Blueprint > BlackmagicCustomTimeStep and name the blueprint something like “BM_Genlock”.
The settings for this should match what you did for the Video Data Source.
Set the Project Settings > Engine General Settings > Framerate > Custom TimeStep = “BM_Genlock”.
Your genlock pane should be reporting correctly, and the FPS should be roughly your genlock rate.
Debugging Note: Sometimes you may need to close then restart the MediaBundle in your scene to get the video image to work.
Shortcut: There is a shortcut for setting up the basic Focus Iris Zoom file and the basic lens file. In the Content Browser pane, you can click View Options and Show Plugin Content, navigate to the OptiTrackLiveLink folder, then copy the contents of this folder into your main content folder. Doing this will save you a lot of steps, but we will cover how to make these files manually as well.
We need to make a blueprint responsible for controlling our lens data.
Right click the Content Browser > Live Link > Blueprint Virtual Subject, then select the LiveLinkCameraRole in the dropdown.
Name this file something like “FIZ_Data”.
Open the blueprint. Create two new objects called Update Virtual Subject Static Data and Update Virtual Subject Frame Data.
Connect the Static Data one to Event on Initialize and the Frame Data one to Event on Update.
Right click on the blue Static Data and Frame Data pins and Split Struct Pin.
In the Update Virtual Subject Static Data object:
Disable Location Supported and Rotation Supported, then enable the Focus Distance Supported, Aperture Supported, and Focal Length Supported options.
Create three new float variables called Zoom, Iris, and Focus.
Drag them into the Event Graph and select Get to allow those variables to be accessed in the blueprint.
Connect Zoom to Frame Data Focal Length, connect Iris to Frame Data Aperture, and connect Focus to Frame Data Focus Distance.
Compile your blueprint.
Select your variables and set the default value to the lens characteristics you will be using.
For our setup we used:
Zoom 20 mm, Iris f/2.8, and Focus 260 cm.
Compile and save your FIZ blueprint.
Both the Focus and Iris graphs should form an elongated "S" shape based on the two data points entered for each in the lens file section below.
To create a lens file right click the Content Browser > Miscellaneous > Lens File, then name the file appropriately.
Double click the lens file to open it.
Switch to the Lens File Panel.
Click the Focus parameter.
Right click in the graph area and choose Add Data Point, click Input Focus and enter 10, then enter 10 for the Encoder mapping.
Repeat the above step to create a second data point, but with values of 1000 and 1000.
Click the Iris parameter.
Right click in the graph area and choose Add Data Point.
Click Input Iris and enter 1.4, then enter 1.4 for the Encoder mapping.
Repeat the above step to create a second data point, but with values of 22 and 22.
Save your lens file.
The above process is to set up the valid ranges for our lens focus and iris data. If you use a lens encoder, then this data will be controlled by the input from that device.
In the Window > Live Link pane, click the + Source icon, then Add Virtual Subject.
Choose the FIZ_Data object that we created in the FIZ Data section above and add it.
Click the + Source icon, navigate to the OptiTrack source, and click Create.
Click Presets and create a new preset.
Go to Edit > Project Settings, search for Live Link, and set the preset that you just created as the Default Live Link Preset.
You may want to restart your project at this point to verify that the live link pane auto-populates on startup correctly. Sometimes you need to set this preset twice to get it to work.
From the Place Actors window, create an Empty Actor; this will act as the camera parent.
Add it to the nDisplay_InCamVFX_Config object.
Create another actor object and make it a child of the camera parent actor.
Zero out the location of the camera parent actor from the Details pane under Transform.
For our setup, in the image to the right, we have labeled the empty actor “Cine_Parent” and its child object “CineCameraActor1”.
Select the default “CineCameraActor1” object in the World Outliner pane.
In the Details pane there should be a total of two LiveLinkComponentControllers.
You can add a new one by using the + Add Component button.
For our setup we have labeled one live link controller “Lens” and the other “OptiTrack”.
In the “OptiTrack” controller, click Subject Representation and choose the Rigid Body associated with your camera.
In the “Lens” controller, click Subject Representation and choose the virtual camera subject. Then, still in the “Lens” Live Link Controller, navigate to Role Controllers > Camera Role > Camera Calibration > Lens File Picker and select the lens file you created. This process allows your camera to be tracked and associates the lens data with the camera you will be using.
From the Place Actors window, create an Empty Actor and add it to the nDisplay_InCamVFX_Config object.
Zero out the location of this actor.
In our setup we have named our Empty Actor "Checkerboard_Parent".
From the Place Actors window also create a “Camera Calibration Checkerboard” actor for validating our camera lens information later.
Make it a child of the "Checkerboard_Parent" actor from before.
Configure the Num Corner Row and Num Corner Cols.
These values should be one less than the number of black/white squares on your calibration board. For example, if your calibration board has 9 rows of alternating black and white squares and 13 columns across of black and white squares, you would input 8 in the Num Corner Row field and 12 in the Num Corner Cols field.
Also input the Square Side Length which is the measurement of a single square (black or white).
Set the Odd Cube Materials and Even Cube Materials to solid colors to make it more visible.
Select "Checkerboard_Parent" and + Add Component of a Live Link Controller.
Add the checkerboard Rigid Body from Motive as the Subject Representation.
At this point your checkerboard should be tracking in Unreal Engine.
Double click the "Lens" file from earlier and go to the Calibration Steps tab and the Lens Information section.
On the right, select your Media Source.
Set the Lens Model Name and Serial Number to some relevant values based on what physical lens you are using for your camera.
The Sensor Dimensions are the trickiest portion to get correct here.
This is the physical size of the image sensor on your camera in millimeters.
You will need to consult the documentation for your particular camera to find this information.
For example, for the Sony FS7 recording at 1920x1080, we'd input X = 22.78 mm and Y = 12.817 mm for the Sensor Dimensions.
The lens information will calculate the intrinsic values of the lens you are using.
Choose the Lens Distortion Checkerboard algorithm and choose the checkerboard object you created above.
The Transparency slider can be adjusted between showing the camera image, 3D scene, or a mix of both. Show at least some of the raw camera image for this step.
Place the checkerboard in the view of the camera, then click in the 2D view to take a sample of the calibration board.
You will want to give the algorithm a variety of samples mostly around the edge of the image.
You will also want to get some samples of the calibration board at two different distances. One closer to the camera and one closer to where you will be capturing video.
Taking samples can be a bit of an art form.
You will want somewhere around 15 samples.
Once you are done click Add to Lens Distortion Calibration.
With an OptiTrack system you are looking for an RMS Reprojection Error of around 0.1 at the end. Slightly higher values can be acceptable as well, but will be less accurate.
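For reference, RMS reprojection error is conventionally defined as

$$\mathrm{RMS} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \lVert \mathbf{x}_i - \hat{\mathbf{x}}_i \rVert^2},$$

where $\mathbf{x}_i$ is a detected checkerboard corner in the image and $\hat{\mathbf{x}}_i$ is the same corner reprojected through the fitted lens model, typically expressed in pixels (the standard computer-vision definition, given here for context).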
The Nodal Offset tab will calculate the extrinsics, i.e., the position of the camera relative to the OptiTrack Rigid Body.
Select the Nodal Offset Checkerboard algorithm and your checkerboard from above.
Take samples similar to the Lens Distortion section.
You will want somewhere around 5 samples.
Click Apply to Camera Parent.
This will modify the position of the “Cine_Parent" actor created above.
Set the Transparency to 0.5.
This will allow you to see both the direct feed from the camera and the 3D overlay at the same time. As long as your calibration board is correctly set up in the 3D scene, then you can verify that the 3D object perfectly overlays on the 2D studio camera image.
In the World Outliner, click the Edit nDisplay_InCamVFX_Config button. This will load the controls for configuring nDisplay.
For larger setups, you will configure a display per section of the LED wall. For smaller setups, you can delete additional sections (VP_1, VP_2, and VP_3) accordingly from the 3D view and the Cluster pane.
For a single display:
Select VP_0 and in the Details pane set the Region > W and H properties to the resolution of your LED display.
Do the same for Node_0 (Master).
Select VP_0 and load the plane mesh we created earlier to represent the LED wall tracked in Motive.
An example file for the plane mesh can be found in the Contents folder of the OptiTrack Live Link Plugin. This file defines the physical dimensions of the LED wall.
Select the "ICVFXCamera" actor, then choose your camera object under In-Camera VFX > Cine Camera Actor.
Compile and save this blueprint.
Click Export to save out the nDisplay configuration file. (This file is what you will be asked for in the future in an application called Switchboard, so save it somewhere easy to find.)
Go back to your main Unreal Engine window and click on the nDisplay object.
Click + Add Component and add a Live Link Controller.
Set the Subject Representation to the Rigid Body for your LED Wall in Motive and set the Component to Control to “SM_Screen_0”.
At this point your LED Wall should be tracked in the scene, but none of the rendering will look correct yet.
To validate that this was all setup correctly you can turn off Evaluate Live Link for your CineCamera and move it so that it is in front of the nDisplay LED Wall.
Make sure to re-enable Evaluate Live Link afterwards.
The next step would be to add whatever reference scene you want to use for your LED Wall Virtual Production shoot. For example, we just duplicated a few of the color calibrators (see image to the right) included with the sample project, so that we have some objects to visualize in the scene.
If you haven’t already you will need to go to File > Save All at this point. Ideally, you should save frequently during the whole process to make sure you don’t lose your data.
Click the double arrows above the 3D Viewport >> and choose Switchboard > Launch Switchboard Listener. This launches an application that listens for a signal from Switchboard to start your experience.
Click the double arrows above the 3D Viewport >> and choose Launch Switchboard.
If this is your first time doing this, then there will be a small installer that runs in the command window.
A popup window will appear.
Click the Browse button next to the uProject option and navigate to your project file (.uproject).
Then click Ok and the Switchboard application will launch.
In Switchboard click Add Device, choose nDisplay, click Browse and choose the nDisplay configuration file (.ndisplay) that you created previously.
In Settings, verify that the correct project, directories and nDisplay are being referenced.
Click the power plug icon to Connect all devices.
Make sure to save and close your Unreal Engine project.
Click the up arrow button to Start All Connected Devices.
The image on the LED wall should look different when you point the camera at it, since it is calculating for the distortion and position of the lens. From the view of the camera it should almost look like you are looking through a window where the LED wall is located.
You might notice that the edge of the camera’s view is a hard edge. You can fix this and expand the field of view slightly to account for small amounts of lag by going back to your Unreal Engine project into the nDisplay object.
Select the "ICVFXCamera" object in the Components pane.
In the Details pane, set the Field of View Multiplier to a value of about 1.2 to account for any latency, then set the Soft Edge > Top and Bottom and Sides properties to around 0.25 to blur the edges.
From an outside perspective, the final product will look like a static image that updates based on where the camera is pointing. From the view of the cameras, it will essentially look like you are looking through a window to a different world.
In our example, we are just tracking a few simple objects. In real productions you’ll use high quality 3D assets and place objects in front of the LED wall that fit with the scene behind to create a more immersive experience, like seen in the image to the right. With large LED walls, the walls themselves provide the natural lighting needed to make the scene look realistic. With everything set up correctly, what you can do is only limited by your budget and imagination.
With an optimized system setup, motion capture systems are capable of obtaining extremely accurate tracking data from a small to medium sized capture volume. This quick start guide includes general tips and suggestions on precision capture system setups and important cautions to keep in mind. This page also covers some of the precision verification methods in Motive. For more general instructions, please refer to the Quick Start Guide: Getting Started or corresponding workflow pages.
Before going into details on precision tracking with an OptiTrack system, let's start with a brief explanation of the residual value, which is the key reconstruction output for monitoring system precision. The residual value is the average offset distance, in mm, between the converging rays when reconstructing a marker, and it therefore indicates the precision of the reconstruction. A smaller residual value means that the tracked rays converge more precisely and achieve a more accurate 3D reconstruction. A well-tracked marker will have a sub-millimeter average residual value. In Motive, the tolerable residual distance is defined in the Reconstruction Settings under the Application Settings panel.
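Written out as a formula (our shorthand notation, not Motive's internal definition), the mean residual of a marker reconstructed from $N$ contributing rays is simply the average of the per-ray offsets:

$$ r = \frac{1}{N}\sum_{i=1}^{N} d_i $$

where $d_i$ is the perpendicular distance, in millimeters, between the $i$-th camera ray and the reconstructed 3D marker position. For a well-tracked marker, $r$ stays below 1 mm.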
When one or more markers are selected in Live mode or in the 2D data of a recorded capture, the corresponding mean residual value is displayed in the Status Panel located at the bottom-right corner of Motive.
First of all, optimize the capture volume for the most precise and accurate tracking results. Avoid a populated area when setting up the system and recording a capture. Clear any obstacles or trip hazards around the capture volume. Physical impacts on the setup will distort the calibration quality, and it could be critical especially when tracking at a sub-millimeter accuracy. Lastly, for best results, routinely recalibrate the capture volume.
Motion capture cameras detect reflected infrared light. Thus, having other reflective objects in the volume will alter the results negatively, which can be critical for precise tracking applications. If possible, use background objects that are IR black and non-reflective. Capturing against a dark background provides clear contrast between bright and dark pixels, which can be less distinguishable against a white background.
Optimized camera placement techniques will greatly improve the tracking result and the measurement accuracy. The following guide highlights important setup instructions for the small volume tracking. For more details on general system setup, read through the Hardware Setup pages.
Mounting Locations
For precise tracking, better results will be obtained by placing cameras closer to the target object (adjusting focus will be required) in a sphere or dome-shaped camera arrangement, as shown in the images on the right. Good positional data in all dimensions (X, Y, and Z axis) will be attained only if there are cameras contributing to the calculation from a variety of different locations; each unique vantage adds additional data.
Mount Securely
For the most accurate results, cameras should be perfectly stationary, securely fastened to a truss system or an extremely rigid structure. Any slight deformation or fluctuation of the mount structure may affect the result in sub-millimeter tracking applications. A small truss system is ideal for the setup. Take extreme caution when mounting onto speed rails attached to a wall, because the building itself may expand and contract on hot days.
Increase the f-stop (smaller aperture) to gain a larger depth of field. An increased depth of field keeps a greater portion of the capture volume in focus and makes measurements more consistent throughout the volume.
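As a rough guide (the standard thin-lens approximation, not an OptiTrack-specific formula), for a subject at distance $s$ well inside the hyperfocal distance, the depth of field grows approximately linearly with the f-number $N$:

$$ \mathrm{DoF} \approx \frac{2Ncs^{2}}{f^{2}} $$

where $f$ is the focal length and $c$ is the acceptable circle of confusion. Doubling the f-number therefore roughly doubles the in-focus depth, at the cost of a darker image, which is why exposure and lighting may need to be adjusted in tandem.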
Especially for close-up captures, camera aim and focus should be adjusted precisely. Aim the cameras towards the center of the capture volume. Optimize the camera focus by zooming into a marker in Motive, and rotating the focus knob on the camera until the smallest marker is captured with clearest image contrast. To zoom in and out from the camera view, place the mouse cursor over the 2D camera preview window in Motive and use the mouse-scroll.
For more information, please read through the Aiming and Focusing workflow page.
The following sections cover key configuration settings which need to be optimized for the precision tracking.
Camera settings are configured using the Devices pane and the Properties pane, both of which can be opened under the View tab in Motive.
Live-reconstruction settings can be configured under the Application Settings panel. These settings determine which data gets reconstructed into 3D data, and, when needed, you can adjust the filter thresholds to prevent inaccurate data from being reconstructed. Read through the Application Settings page for more details on each setting. For precision tracking applications, the key settings and suggested values are listed below:
The following calibration instructions are specific to precision tracking. For more general information, refer to the Calibration page.
For calibrating small capture volumes for precision tracking, we recommend using a Micron Series wand, either the CWM-250 or CWM-125. These wands are made of invar alloy, very rigid and insensitive to temperature, and they are designed to provide a precise and constant reference dimension during calibration. At the bottom of the wand head, there is a label which shows a factory-calibrated wand length with a sub-millimeter accuracy. In the Calibration pane, select Micron Series under the OptiWand dropdown menu, and define the exact length under the Wand Length.
The CW-500 wand is designed for capturing medium to large volumes, and it is not suited for calibrating small volumes. Not only does it lack an indication of the factory-calibrated length, but it is also made of aluminum, which makes it more vulnerable to thermal expansion. During the wanding process, Motive references the wand length for calibrating the capture volume, and any distortion in the wand length would cause the calibrated capture volume to be scaled slightly differently, which can be significant when capturing precise measurements. For this reason, a Micron Series wand is suitable for precision tracking applications.
Note: Never touch the markers on the CWM-250 or CWM-125, since any change can affect the calibration and the overall data.
Precision Capture Calibration Tips
Wand slowly. Waving the wand around quickly at high exposure settings will blur the markers and distort the centroid calculations, ultimately reducing the quality of your calibration.
Avoid occluding any of the calibration markers while wanding. Occluding markers will reduce the quality of the calibration.
A variety of unique samples is needed to achieve a good calibration. Wand in three dimensions: wave the wand in a variety of orientations and throughout the volume.
Extra wanding in the target area you wish to capture will improve the tracking in the target region.
Wanding the edges of the volume helps improve the lens distortion calculations. This may cause Motive to report a slightly worse overall calibration result, but it will provide a better quality calibration, as explained below.
Starting/stopping the calibration process with the wand in the volume may help avoid getting rough samples outside your volume when entering and leaving.
Calibration reports and analyzing the reported error is a complicated subject because the calibration process uses its own samples for validation. For example, sampling near the edge of the volume may improve the accuracy of the system but provide slightly worse calibration results. This is because the samples near the edge will have more errors to be corrected. Acceptable mean error varies based on the size of your volume, the number of cameras, and desired accuracy. The key metrics to keep an eye on are the Mean 3D Error for the Overall Reprojection and the Wand Error. Generally, use calibrations with the Mean 3D Error less than 0.80 mm and the Wand Error less than 0.030 mm. These numbers may be hard to reproduce in regular volumes. Again, the acceptable numbers are subjective, but lower numbers are better in general.
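As a quick illustration of these thresholds, the snippet below is a hypothetical helper (not part of Motive or its API) that checks numbers copied from a calibration report against the rule-of-thumb limits above:

```python
# Hypothetical helper: sanity-check values read off Motive's calibration
# report against the rule-of-thumb thresholds suggested in this section.

def calibration_acceptable(mean_3d_error_mm: float,
                           wand_error_mm: float,
                           max_mean_3d_mm: float = 0.80,
                           max_wand_mm: float = 0.030) -> bool:
    """Return True when both metrics fall under the suggested limits."""
    return (mean_3d_error_mm < max_mean_3d_mm
            and wand_error_mm < max_wand_mm)

print(calibration_acceptable(0.45, 0.021))  # True  -> acceptable
print(calibration_acceptable(1.10, 0.021))  # False -> consider re-wanding
```

Keep in mind, as noted above, that these numbers are subjective and depend on volume size, camera count, and desired accuracy.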
In general, passive retro-reflective markers provide better tracking accuracy. The boundary of the spherical marker can be more clearly distinguished on passive markers, so the system can identify an accurate position for the marker centroids. Active markers, on the other hand, emit light, and the illumination may not appear spherical in the camera view. Even if a spherical diffuser is used, there can be situations where the light is not evenly distributed, which can produce inaccurate centroid data. For this reason, passive markers are preferred for precision tracking applications.
For close-up capture, it may be unavoidable to place markers close to one another, and when markers are placed in close vicinity, their reflections may merge on the camera's imager. Merged reflections will have an inaccurate centroid location, or they may even be completely discarded by the circularity filter or the intrusion detection feature. For best results, keep the circularity filter at a higher setting (>0.6) and decrease the intrusion band in the camera group 2D filter settings to make sure only relevant reflections are reconstructed. The optimal balance will depend on the number and arrangement of the cameras in the setup.
There are post-processing methods to discard or correct the affected data. However, for the most reliable results, such marker intrusions should be prevented before the capture by separating the marker placements or by optimizing the camera placements.
Once a Rigid Body is defined from a set of reconstructed points, utilize the Rigid Body Refinement feature to further refine the Rigid Body definition for precision tracking. The tool allows Motive to collect additional samples in the live mode for achieving more accurate tracking results.
In a mocap system, camera mount structures and other hardware components may be affected by temperature fluctuations. Refer to linear thermal expansion coefficient tables to examine which materials are susceptible to temperature changes, and avoid using temperature-sensitive materials for mounting the cameras. For example, aluminum has a relatively high thermal expansion coefficient, and therefore mounting cameras onto aluminum structures may distort the calibration quality. For best accuracy, routinely recalibrate the capture volume, and take temperature fluctuation into account both when selecting the mount structures and before collecting data.
An ideal method of avoiding influence from environmental temperature is to install the system in a temperature-controlled volume. If such an option is unavailable, routinely calibrate the volume before capture, and recalibrate the volume between sessions when capturing for a long period. The effects are especially noticeable on hot days and will significantly affect your results. Thus, consistently monitor the average residual value and how well your rays converge to individual markers.
The cameras will heat up with extended use, and changes in internal hardware temperature may also affect the capture data. For this reason, avoid capturing or calibrating right after powering the system. Tests have found that the cameras need to warm up in Live mode for about an hour until they reach a stable temperature. Typical stable temperatures are between 40 and 50 degrees Celsius, or about 25 degrees Celsius above the ambient temperature. For Ethernet camera models, camera temperatures can be monitored from the Cameras View in Motive (Cameras View > Eye Icon > Camera Info).
If a camera exceeds 80 degrees Celsius, this is cause for concern: it can cause frame drops and potential harm to the camera. If possible, keep the ambient environment as cool, dry, and consistent as possible.
Especially for measuring at sub-millimeters, even a minimal shift of the setup can affect the recordings. Re-calibrate the capture volume if your average residual values start to deviate. In particular, watch out for the following:
Avoid touching the cameras and the camera mounts.
Keep the capture area away from heavy foot traffic. People shouldn't be walking around the volume while the capture is taking place.
Closing doors, even from the outside, may be noticeable during recording.
The following methods can be used to check the tracking accuracy and to better optimize the reconstructions settings in Motive.
The calibration quality can also be analyzed by checking the convergence of the tracked rays into a marker. This is not as precise as the first method, but the tracked rays can be used to check the calibration quality of multiple cameras at once. First, make sure tracked rays are visible: Perspective View pane > Eye button > Tracked Rays. Then, select a marker in the Perspective View pane. Zoom all the way into the marker (you may need to zoom into the sphere), and you will be able to see the tracked rays (green) converging into the center of the marker. A good calibration should have all the rays converging at approximately one point, as shown in the following image. Essentially, this is a visual way of examining the average residual offset of the converging rays.
In Motive 3.0, a new feature called Continuous Calibration was introduced. This can help maintain precision for longer between calibrations. For more information regarding continuous calibration, please refer to the Continuous Calibration wiki page.
This page provides instructions on how to set up and use the OptiTrack active marker solution.
Additional Note
This guide is for OptiTrack active components only. Third-party IR LEDs will not work with the instructions provided on this page.
This solution is supported for Ethernet camera systems (Slim 13E or Prime series cameras) only. USB camera systems are not supported.
Motive version 2.0 or above is required.
This guide covers active component firmware versions 1.0 and above; this includes all active components that were shipped after September 2017.
For active components that were shipped prior to September 2017, please see the corresponding page for more information about firmware compatibility.
The OptiTrack Active Tracking solution allows synchronized tracking of active LED markers using an OptiTrack camera system. It consists of the BaseStation and, depending on the user's choice, Active Tags that can be integrated into any object and/or the "Active Puck", which can act as its own single Rigid Body.
Connected to the camera system, the BaseStation emits RF signals to the active markers, allowing precise synchronization between camera exposure and the illumination of the LEDs. Each active marker is uniquely labeled in Motive, allowing more stable Rigid Body tracking: active markers will never be mislabeled, and unique marker placements are no longer required for distinguishing multiple Rigid Bodies.
Sends out radio frequency signals for synchronizing the active markers.
Powered by PoE, connected via Ethernet cable.
Must be connected to one of the switches in the camera network.
Connects to a USB power source and illuminates the active LEDs.
Receives RF signals from the Base Station and correspondingly synchronizes illumination of the connected active LED markers.
Emits 850 nm IR light.
4 active LEDs in each bundle and up to two bundles can be connected to each Tag.
(8 active LEDs per Tag: 4 LEDs/set × 2 sets)
Size: 5 mm (T1 ¾) Plastic Package, half angle ±65°, typ. 12 mW/sr at 100mA
An Active Tag self-contained in a trackable object, providing 6 DoF information for any arbitrary object it is attached to. It carries a factory-installed Active Tag with 8 LEDs and a rechargeable battery with up to 10 hours of run time on a single charge.
Connects to one of the PoE switches within the camera network.
For best performance, place the base station near the center of your tracking space, with unobstructed lines of sight to the areas where your Active Tags will be located during use. Although the wireless signal is capable of traveling through many types of obstructions, there still exists the possibility of reduced range as a result of interference, particularly from metal and other dense materials.
Do not place external electromagnetic or radiofrequency devices near the Base Station.
When the BaseStation is working properly, the LED closest to the antenna should blink green while Motive is running.
BaseStation LEDs
Note: The behavior of the LEDs on the BaseStation is subject to change.
Communication Indicator LED: When the BaseStation is successfully sending out data and communicating with the active pucks, the LED closest to the antenna will blink green. If this LED lights red, it indicates that the BaseStation has failed to establish a connection with Motive.
Interference Indicator LED: The middle LED indicates whether there is other signal traffic on the respective radio channel and PAN ID that might interfere with the active components. This LED should stay dark for the active marker system to work properly. If it flashes red, consider switching both the channel and PAN ID on all of the active components.
Power Indicator LED: The LED located at the corner, furthest from the antenna, indicates power for the BaseStation.
Connect two sets of active markers (4 LEDs in each set) into a Tag.
Connect the battery and/or a micro USB cable to power the Tag. The Tag accepts 3.3V ~ 5.0V input from the micro USB cable. When powering through the battery, use only the batteries supplied by us. To recharge the battery, keep the battery connected to the Tag and then connect the micro USB cable.
To initialize the Tag, press the power switch once. Be careful not to hold down the power switch for more than a second, because doing so will start the device in firmware update (DFU) mode. If it initializes in DFU mode, which is indicated by two orange LEDs, just power off and restart the Tag. To power off the Tag, hold down the power switch until the status LEDs go dark.
Once powered, you should be able to see the illumination of IR LEDs from the 2D reference camera view.
Puck Setup
Press the power button for 1~2 seconds and release. The top-left LED will illuminate orange while the Puck initializes. Once initialized, the bottom LED will light up green if it has made a successful connection with the BaseStation. The top-left LED will then start blinking green, indicating that sync packets are being received.
Active Pattern Depth
Settings → Live Pipeline → Solver Tab with Default value = 12
This adjusts the complexity of the illumination patterns produced by active markers. In most applications, the default value gives quality tracking results. If a high number of Rigid Bodies are tracked simultaneously, this value can be increased to allow more combinations of illumination patterns on each marker. If this value is set too low, duplicate active IDs can be produced; should this error appear, increase the value of this setting.
Minimum Active Count
Settings → Live Pipeline → Solver Tab with Default value = 3
Sets the number of rays required to establish the active ID for each on-frame of an active marker cycle. If this value is increased and active markers become occluded, it may take longer for active markers to be re-established in the Motive view. The majority of applications will not need to alter this setting.
Active Marker Color
Settings → Views → 3D Tab with Default color = blue
The color assigned to this setting will be used to indicate and distinguish active and passive markers seen in the viewer pane of Motive.
For tracking of the active LED markers, the following camera settings may need to be adjusted for best tracking results:
For tracking active markers, set the camera exposure a bit higher than when tracking passive markers. This allows the cameras to better detect the active markers. The optimal value will vary depending on the camera system setup, but in general, you would want to set the camera exposure between 400 and 750 microseconds.
Rigid Body definitions that are created from actively labeled reconstructions will search for specific marker IDs, along with the marker placements, to track the Rigid Body. This is further explained in the following section.
Duplicate active frame IDs
For active labeling to work properly, it is important that each marker has a unique active ID. When more than one marker shares the same ID, there may be problems reconstructing those active markers. In this case, the following notification message will show up. If you see this notification, please contact support to change the active IDs on the active markers.
In recorded 3D data, the labels of unlabeled active markers will still indicate that they are active markers. As shown in the image below, an Active prefix will be assigned in addition to the active ID to indicate that it is an active marker. This applies only to individual active markers that are not auto-labeled. Markers that are auto-labeled using a trackable model will be assigned their respective labels.
When a trackable asset (e.g. a Rigid Body) is defined using active markers, its active ID information is stored in the asset along with the marker positions. When auto-labeling markers in the space, the trackable asset will search for reconstructions with matching active IDs, in addition to matching marker arrangements, to auto-label a set of markers. This adds an extra guard to the auto-labeler and prevents mislabeling errors.
Rigid Body definitions created from actively labeled reconstructions will search for the respective marker IDs in order to solve the Rigid Body. This is a huge benefit because active markers can be placed in perfectly symmetrical arrangements across multiple Rigid Bodies without running into labeling swaps. With active markers, only the 3D reconstructions with active IDs stored under the corresponding Rigid Body definition will contribute to the solve.
If a Rigid Body was created from actively labeled reconstructions, the corresponding active IDs are saved under the Rigid Body properties. For the Rigid Body to be tracked, reconstructions with matching marker IDs, in addition to matching marker placements, must be tracked in the volume. If the active ID is set to 0, no particular marker ID is tied to the Rigid Body definition, and any reconstruction can contribute to the solve.
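The sketch below illustrates this matching logic; the data structures are invented for illustration and are not Motive's API:

```python
# Illustrative sketch of active-ID filtering during auto-labeling.
# These classes are hypothetical stand-ins, not Motive data structures.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Reconstruction:
    active_id: int                  # 0 = no active ID (e.g. passive marker)
    position: Tuple[float, float, float]

@dataclass
class RigidBodyDef:
    marker_ids: List[int]           # active IDs stored with the asset

def candidate_markers(rb: RigidBodyDef,
                      recons: List[Reconstruction]) -> List[Reconstruction]:
    """Only reconstructions whose active IDs appear in the Rigid Body
    definition may contribute to the solve. An all-zero ID list means
    the definition carries no ID constraint, so any reconstruction with
    a matching marker arrangement can contribute."""
    if all(mid == 0 for mid in rb.marker_ids):
        return recons
    wanted = set(rb.marker_ids)
    return [r for r in recons if r.active_id in wanted]
```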
PrimeX 41, PrimeX 22, Prime 41*, and Prime 17W* camera models have powerful tracking capability that allows tracking outdoors. With strong infrared (IR) LED illuminations and some adjustments to its settings, a Prime system can overcome sunlight interference and perform 3D capture. This page provides general hardware and software system setup recommendations for outdoor captures.
Please note that when capturing outdoors, the cameras will have shorter tracking ranges compared to when tracking indoors. Also, the system calibration will be more susceptible to change in outdoor applications because there are environmental variables (e.g. sunlight, wind, etc.) that could alter the system setup. To ensure tracking accuracy, routinely re-calibrate the cameras throughout the capture session.
Even though it is possible to capture under the influence of the sun, it is best to pick cloudy days for captures in order to obtain the best tracking results. The reasons include the following:
Bright illumination from the daylight will introduce extraneous reconstructions, requiring additional effort in the post-processing on cleaning up the captured data.
Throughout the day, the position of the sun will continuously change as will the reflections and shadows of the nearby objects. For this reason, the camera system needs to be routinely re-masked or re-calibrated.
The surroundings can also work to your advantage or disadvantage depending on the situation. Different outdoor objects reflect 850 nm infrared (IR) light in ways that can be unpredictable without testing. Lining your background with objects that are black in IR will help distinguish your markers from the background, which will help with tracking. Some examples of outdoor objects and their relative brightness are as follows:
Grass typically appears as bright white in IR.
Asphalt typically appears dark black in IR.
Concrete depends, but it's usually a gray in IR.
1. [Camera Setup] Use tripods for mounting
In general, setting up a truss system for mounting the cameras is recommended for stability, but for outdoor captures, it could be too much effort to do so. For this reason, most outdoor capture applications use tripods for mounting the cameras.
2. [Camera Setup] Aim cameras at a downward angle
Do not aim the cameras directly towards the sun. If possible, place and aim the cameras so that they are capturing the target volume at a downward angle from above.
3. [Camera Setup] Increase the f-stop
Increase the f-stop setting on the Prime cameras to decrease the aperture size of the lenses. The f-stop setting determines the amount of light let through the lenses, and increasing the f-stop value decreases the overall brightness of the captured image, allowing the system to better accommodate sunlight interference. Furthermore, this allows camera exposures to be set to a higher value, which is discussed in a later section. Note that the f-stop can be adjusted only on PrimeX 41, PrimeX 22, Prime 41*, and Prime 17W* camera models.
4. [Camera Setup] Utilize shadows
Even though it is possible to capture under sunlight, the best tracking result is achieved when the capture environment is best optimized for tracking. Whenever applicable, utilize shaded areas in order to minimize the interference by sunlight.
1. [Camera Settings] Maximize IR LED strength
Increase the LED setting on the camera system to its maximum so that the IR LEDs illuminate at maximum strength. Strong IR illumination allows the cameras to better differentiate the emitted IR reflections from ambient sunlight.
2. [Camera Settings] Increase camera exposure
In general, increasing camera exposure makes the overall image brighter, but it also allows the IR LEDs to light up and remain at maximum brightness for a longer period of time in each frame. This way, the IR illumination is stronger on the cameras, and the imager can more easily detect the marker reflections in the IR spectrum.
When used in combination with the increased f-stop on the lens, this adjustment gives a better distinction of IR reflections. Note that this setup applies only to outdoor applications; for indoor applications, the exposure setting is generally used to control the overall brightness of the image.
*Legacy camera models
This page provides instructions on how to set up, configure, and use the Prime Color video camera.
Prime Color
The Prime Color is a full-color video camera capable of recording synchronized, high-speed color video. It can also be hooked up to a mocap system and used as a reference camera. The camera records high frame rate video (up to 500 FPS at 540p) at resolutions up to 1080p (at 250 FPS) by performing onboard compression (H.264) of captured frames. It connects to the camera network and receives power through a standard PoE connection.
eStrobe
When capturing high-speed video, camera exposures are very short, and thus providing sufficient lighting becomes critical for obtaining clear images. The eStrobe is designed to optimally brighten the image taken by the Prime Color camera by precisely synchronizing the illumination of its LEDs to each camera exposure. This allows the LEDs to illuminate at the right timing, producing the most efficient and powerful lighting for high-speed video capture. Also, the eStrobe emits white light only, and it will not interfere with tracking within the IR spectrum.
The eStrobe is intended for indoor use only. For capturing outdoors, the sunlight will provide sufficient lighting for the high-speed capture.
Required PC specifications may vary depending on the size of the camera system. Generally, the recommended specs are required for systems with more than 24 cameras.
Prime Color cameras require the computer to be equipped with a dedicated graphics card with performance of a GTX 1050 or better, running the latest driver supporting OpenGL version 4.0 or higher.
Different types of lenses can be equipped on a Prime Color camera as long as the lens mount is compatible; however, we suggest using C-mount lenses to fully utilize the imager. Prime Color cameras with C-mount can be equipped with either the 12mm F#1.8 lens or the 6.8mm F#1.6 lens. The 12mm lens is zoomed in more and is more suitable for capturing at long range. The 6.8mm lens has a larger field of view and is more suitable for capturing a wide area. Both lenses have adjustable f-stop and focus settings, which can be optimized for different capture environments and applications.
F-Stop: Set the f-stop to a low value to make the aperture bigger. This lets more light onto the imager, improving image quality. However, it also decreases the camera's depth of field, requiring the lens to be focused specifically on the target capture area.
Focus: For best image quality, make sure the lenses are focused on the target tracking area.
6.5mm F#1.6 lens: When capturing 1080p images with the 6.5mm F#1.6 lens, you may see vignetting in the corners of the captured frames due to imager size limitations. For a larger FOV, please use the 6.8mm F#1.6 lens to avoid this vignetting issue.
Detecting Dropped 2D Frames
Note: Due to the current architecture of our bug reporting in Motive, a single color camera will not display dropped frame messages. If you need these messages you will need to either connect another camera or an eSync 2 into the system.
Each Prime Color camera must be uplinked and powered through a standard PoE connection that can provide at least 15.4 watts to each port simultaneously.
Prime Color cameras connect to the camera system just like other Prime series camera models. Simply plug the camera into a PoE switch with enough available bandwidth, and it will be powered and synchronized along with the other tracking cameras. When you have two or more color cameras, distribute them evenly across different PoE switches so that the data load is balanced.
When using multiple Prime Color cameras, we recommend connecting the color cameras directly into the 10-gigabit aggregation (uplink) switch, because this setup is best for preventing bandwidth bottlenecks. A PoE injector will be required if the uplink switch does not provide PoE. This allows the data to travel directly to the uplink switch and on to the host computer through the 10-gigabit network interface. It also separates the color cameras from the tracking cameras.
The eStrobe synchronizes with Prime Color cameras through an RCA cable connection. It receives exposure signals from the cameras and synchronizes its illumination accordingly. Depending on the frame rate of the camera system, the eStrobe varies its illumination frequency, and it also varies its percent duty cycle depending on the exposure length. Multiple eStrobes can be daisy-chained in series by relaying the sync signal from the output port of one to the input port of another, as shown in the diagram.
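To a first approximation (our reading of the behavior described above, not a published specification), the duty cycle scales with the exposure length relative to the camera frame period:

$$ \text{duty cycle} \approx t_{\text{exposure}} \cdot f_{\text{camera}} $$

For example, a 250 µs exposure at 500 FPS gives roughly $0.00025 \times 500 = 12.5\%$ duty cycle.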
Illumination:
The eStrobe emits only white light and does not interfere with tracking within the IR spectrum. In other words, its powerful illumination will not introduce noise to the IR tracking data.
Power Requirement:
Warning:
Please be aware of the hot surface. The eStrobe will get very hot as it runs.
Avoid looking directly at the eStrobe; it could damage your eyes.
Make sure the power strips or extension cords are able to handle the power. Using light-duty components could damage the cords, or even the device, if they cannot sufficiently handle the amount of power drawn by the eStrobes.
The eStrobe is not typically needed for outdoor use. Sunlight should provide enough lighting for the capture.
When capturing without eStrobes, the camera relies entirely on ambient lighting to capture the image, and the brightness of the captured frames may vary depending on which type of light source is used. In general, when capturing without an eStrobe, we recommend setting the camera to a lower frame rate (30~120 FPS) and increasing the camera exposure to allow a longer exposure time so that the imager can take in more light.
Indoor
When capturing indoors without the eStrobe, you will be relying on the room lighting to brighten the volume. It is important to note that every type of artificial light source illuminates, or flickers, at a certain frequency (e.g. fluorescent light bulbs typically flicker at 120 Hz). This is usually fast enough that the flickering is not noticeable to human eyes; however, with high-speed cameras, the flickering may become apparent.
When a Prime Color camera captures at a frame rate higher than the ambient illumination frequency, you will start noticing brightness changes between consecutive frames. This happens because, with mismatched frequencies, the camera exposes at different points of the illumination phase. For example, if you capture at 240 FPS with 120 Hz light bulbs lighting the volume, the brightness of captured images may differ between even- and odd-numbered frames throughout the capture. Please take this into consideration and provide appropriate lighting as needed.
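The interaction can be checked with simple arithmetic. The snippet below (an illustration, not an OptiTrack tool) estimates at which phase of the flicker cycle each frame starts; frames landing at different phases will record different brightness:

```python
# Estimate the flicker-cycle phase at which each camera frame starts.
# Frames that start at different phases expose under different light
# levels, producing the alternating brightness described above.

def frame_phases(camera_fps: float, flicker_hz: float, n_frames: int = 4):
    period = 1.0 / flicker_hz
    return [round(((i / camera_fps) % period) / period, 3)
            for i in range(n_frames)]

# 240 FPS under 120 Hz lighting: frames alternate between two phases,
# so even- and odd-numbered frames differ in brightness.
print(frame_phases(240, 120))   # [0.0, 0.5, 0.0, 0.5]

# 120 FPS under the same lighting: every frame starts at the same phase.
print(frame_phases(120, 120))   # [0.0, 0.0, 0.0, 0.0]
```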
Info: Frequencies of typical light bulbs
Fluorescent: Fluorescent light bulbs typically illuminate at 120 Hz with 60 Hz AC input.
Incandescent: Incandescent light bulbs typically illuminate at 120 Hz with 60 Hz AC input.
LED light bulbs: Variable depending on the manufacturer.
eStrobe: LEDs on the eStrobe will be synchronized to the exposure signal from the cameras and illuminate at the same frequency.
Outdoor
When capturing outdoors using Prime Color cameras, sunlight will typically provide enough ambient lighting. Unlike light bulbs, sunlight is emitted continuously, so there is no need to worry about the illumination frequency. Furthermore, the sun is bright enough and you should be able to capture high-quality images by adjusting only the f-stop (aperture size) and the exposure values.
RAM Usage: Open the Windows Task Manager and check the memory usage. If the RAM usage slowly creeps up to the maximum memory while recording a Take, it means the disk drive is not fast enough to write out the color video from RAM. You will have to reduce the bit-rate setting or use a faster disk drive (e.g. M.2 SSD).
Hard Drive Space: Make sure there is enough storage capacity available on the computer. Take files (TAK) with color camera data can be quite large and can quickly fill up the disk, especially when recording lightly-compressed video from multiple color cameras.
Default: 1920, 1080
This property sets the resolution of the images captured by the selected cameras. Since the amount of data increases with resolution, the maximum allowable frame rate varies with the selected resolution. Below are the maximum allowed frame rates for each resolution setting.
Default: Constant Bit Rate.
This property determines how much the captured images are compressed. The Constant Bit-Rate mode is used by default and is recommended, because it makes it easier to control the data transfer rate and efficiently utilize the available network bandwidth.
Constant Bit-Rate
In the Constant Bit-Rate mode, Prime Color cameras vary the degree of image compression to match the data transmission rate given under the Bit Rate settings. At a higher bit-rate setting, the captured image will be compressed less. At a lower bit-rate setting, the captured image will be compressed more to meet the given data transfer rate, but compression artifacts may be introduced if it is set too low.
Variable Bit-Rate
The Variable Bit-Rate setting is also available for keeping the amount of compression constant and allowing the data transfer rate to vary. This mode can be beneficial when capturing images of objects with detailed textures, because it keeps the amount of compression the same on all frames. However, it may introduce dropped frames whenever the camera compresses highly detailed images, because the data transfer rate increases and may overflow the network bandwidth. For this reason, we recommend using the Constant Bit-Rate setting in most applications.
Default: 50
Available only while using Constant Bit-rate Mode
The bit-rate setting determines the transmission rate output by the selected color camera. The value is given as a percentage (up to 100%) of the maximum data transmission speed; each color camera can output up to ~100 MBps. In other words, the configured value indirectly represents the transmission rate in megabytes per second (MBps). At a bit-rate setting of 100, the camera captures the best quality image; however, it could overload the network if there is not enough bandwidth to handle the transmitted data.
Since the bit-rate controls the amount of data output from each color camera, this is one of the most important settings for properly configuring the system. If your system is experiencing 2D frame drops, one of the system requirements is not being met: network bandwidth, CPU processing, or RAM/disk memory. In such cases, you can decrease the bit-rate setting to reduce the amount of data output from the color cameras.
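As a back-of-the-envelope illustration (assuming the ~100 MBps per-camera maximum quoted above), the percentage maps to an approximate per-camera output rate like so:

```python
# Convert a bit-rate setting (percent) into an approximate per-camera
# output rate, assuming the ~100 MBps maximum quoted above.
MAX_CAMERA_OUTPUT_MBPS = 100.0

def estimated_output_mbps(bit_rate_percent: float) -> float:
    return MAX_CAMERA_OUTPUT_MBPS * bit_rate_percent / 100.0

print(estimated_output_mbps(50))    # default setting -> ~50 MB/s
print(estimated_output_mbps(100))   # maximum quality -> ~100 MB/s
```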
Image Quality
The image quality increases at a higher bit-rate setting because more data is recorded, but this results in larger file sizes and possible frame drops due to data bandwidth bottlenecks. The desired balance differs depending on the capture application and what it is used for. The graph below illustrates how the image quality varies depending on the camera frame rate and bit-rate settings.
Tip: Monitoring data output from each camera
Default : 24
Gamma correction is a non-linear amplification of the output image. The gamma setting adjusts the brightness of dark, midtone, and bright pixels differently, affecting both the brightness and contrast of the image. Depending on the capture environment, especially with a dark background, you may need to adjust the gamma setting to get the best quality images.
Default: On
The Prime Color FS is equipped with a filter switcher that allows the camera to detect in the IR spectrum. The Prime Color FS can be calibrated into the 3D capture volume using an active calibration wand with IR LEDs. Once calibrated, the color camera will be placed within the 3D viewport along with the other tracking cameras, and 3D assets (Marker Sets, Rigid Bodies, Skeletons, cameras) can be overlaid as shown in the image.
Active Wand:
Once you have set up the system and configured the cameras correctly, Motive is now ready to capture Takes. Recorded TAK files will contain color video along with the tracking data, and you can play them back in Motive. Also, the color reference video can be exported out from the TAK.
Once the camera is set up, you can start recording from Motive. Captured frames will be stored within the TAK file and you can access them again in Edit mode. Please note that capture files with Prime Color video images will be much larger in file size.
When this is set to Drop Frames, Motive will remove any dropped frames in the color video upon export. Please note that any dropped frames will be completely removed in this case, and thus, the exact frames in the exported file may not match the frames in the corresponding Motive recording. If needed, you can set this export option to Black Frame to insert black, or blank, frames in place of the dropped frames in the exported video.
[Motive:Calibration pane] Mask the remaining extraneous reflections using Motive. Click Block Visible in the Calibration pane, or use the icon in the Camera Preview pane, to apply software masking and automatically block any light sources or reflections that cannot be removed from the volume. Once the masks are applied, all of the extraneous reflections (white) in the 2D Camera Preview pane will be covered with red pixels.
Each capture recording is saved in a Take (TAK) file, and related Take files can be organized in session folders. Start your capture by first creating a new session folder. Create a new folder in the desired directory of the host computer and load the folder into the Data pane, either by clicking the icon or by dragging and dropping the folder onto the data management pane. If no session folder is loaded, all recordings will be saved into the default folder located in the user documents directory (Documents\OptiTrack\Default). All newly recorded Takes will be saved into the currently selected session folder, which will be marked with the symbol.
First, go into the Perspective View pane and select a marker, then go to the Camera Preview pane > Eye Button > Set Marker Centroids: True. Make sure the cameras are in Object mode, then zoom into the selected marker in the 2D view. The marker will have two crosshairs on it: one white and one yellow. The amount of offset between the crosshairs gives you an idea of how closely the calculated 2D centroid location (thicker white line) aligns with the reconstructed position (thinner yellow line). Switching between grayscale mode and Object mode will make the errors more distinguishable. The image below is an example of a poor calibration. A good calibration should have the yellow and white lines closely aligned with each other.
Active tracking is supported only with Ethernet camera systems (Prime series or Slim 13E cameras). For instructions on how to set up a camera system, see the Hardware Setup pages.
For more information, please read through the corresponding wiki page.
When tracking only active markers, the cameras do not need to emit IR light. In this case, you can disable the IR setting in the camera properties.
With a BaseStation and active markers communicating on the same RF channel, active markers will be reconstructed and tracked in Motive automatically. From the unique illumination patterns, each active marker gets labeled individually, and a unique marker ID gets assigned to the corresponding reconstruction in Motive. To check the marker IDs of the respective reconstructions, enable the Marker Labels option under the visual aids, and the IDs of the selected markers will be displayed. The marker IDs assigned to active marker reconstructions are unique and can be used to point to a specific marker among many reconstructions in the scene.
Since each color camera can upload a large amount of data over the network, the size of the recorded Take (TAK) can get quite large even with a short recording. For example, if a 10-second Take is recorded with a total data throughput of 1 GBps, the resulting TAK file will be 10 GB, and it can quickly fill up the storage device. Please make sure there is enough capacity available on the disk drive. If you are exporting the recorded data to video files after capture, re-encoding the videos can make the files orders of magnitude smaller.
Since Prime Color cameras can output a large amount of data to RAM quickly, it is also important that the write-out speed to storage is fast enough. If the write-out speed to the secondary drive isn't fast enough, the occupied RAM may gradually increase to its maximum. For recording with just one or two Prime Color cameras, a standard SSD will do the job. However, when using multiple Prime Color cameras, it is recommended to use a fast storage drive (e.g. M.2 SSD) that can quickly write out the recorded capture from RAM.
When running two or more Prime Color cameras, the computer must have a 10-gigabit network adapter in order to successfully receive all of the data output by the camera system. Please see the network setup section of this page for more information.
Before going into the details of setting up a system with Prime Color cameras, it is important to go over the data bandwidth availability within the camera network. At its maximum setting for capturing the best quality image, one Prime Color camera can transmit data at a rate of up to ~100 megabytes per second (MBps), or ~800 megabits per second (Mbps). For comparison, a tracking camera in Object mode outputs data at a rate of less than 1 MBps, which is several orders of magnitude smaller than the output of a Prime Color camera. A standard network switch (1 Gb) and network card only support network traffic of up to 1000 Mbps (1 Gbps). When Prime Color cameras are used, they can take up a large portion, or all, of the available bandwidth, and for this reason, extra attention to bandwidth use is needed when first setting up the system.
When there is not enough available bandwidth, captured 2D frames may drop due to the data bottleneck. Thus, it is important to take the bandwidth consumption into account and make sure an appropriate set of network switches (PoE and uplink), Ethernet cables, and network card is used. If a 1 Gb network/uplink switch is used, only one Prime Color camera can be used at its maximum bit-rate setting. If two or more Prime Color cameras need to be used, then either a 10 Gb network setup will be required OR the bit-rate setting will need to be turned down. A lower bit-rate will further compress the image with a trade-off in image quality, which may or may not be acceptable depending on the capture application.
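The arithmetic behind these recommendations can be sketched as follows (illustrative only; actual loads depend on resolution, frame rate, and scene content):

```python
# Rough bandwidth budget for color cameras on a shared network link,
# using the ~100 MBps-per-camera figure quoted above.

def network_headroom_mbps(link_gbps: float, color_cams: int,
                          bit_rate_percent: float = 100.0) -> float:
    link_mbps = link_gbps * 1000.0 / 8.0        # gigabits/s -> megabytes/s
    color_load = color_cams * 100.0 * bit_rate_percent / 100.0
    return link_mbps - color_load

print(network_headroom_mbps(1, 1))    #  25.0 -> one camera just fits a 1 Gb link
print(network_headroom_mbps(1, 2))    # -75.0 -> two cameras overflow a 1 Gb link
print(network_headroom_mbps(10, 4))   # 850.0 -> a 10 Gb uplink has ample headroom
```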
Every 2D frame drop is logged, and it can also be identified in the Devices pane, where it is indicated with a warning sign next to the corresponding camera. You may see a few frame drops when booting up the system or when switching between Live and Edit modes; however, this should occur only momentarily. If the system continues to drop 2D frames, that indicates there is a problem with receiving the camera data. If this is happening with Prime Color cameras, try lowering the bit-rate; if the system stops dropping frames, there wasn't enough bandwidth available. To use the cameras at a higher bit-rate setting, you will need to properly balance the load within the available network bandwidth.
The amount of power drawn by each eStrobe varies depending on the system frame rate as well as the length of the camera exposure, because the eStrobe is designed to vary its illumination rate and percent duty cycle depending on those settings. At maximum, one eStrobe can draw up to 240 watts of power. A typical 110V wall outlet outputs 110V at 15A, which totals 1650W of power. There may also be other limiting factors, such as restrictions from the surge protector or extension cords that are used. Therefore, in general, we recommend connecting no more than five eStrobes to a single power source.
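In numbers (using the figures above; always check your own circuit ratings):

```python
# Power budget for eStrobes on a single 110 V / 15 A wall circuit.
ESTROBE_MAX_W = 240            # worst-case draw per eStrobe
OUTLET_W = 110 * 15            # 1650 W from a typical wall outlet

print(OUTLET_W // ESTROBE_MAX_W)   # 6 by raw arithmetic; we recommend no
                                   # more than 5 to leave margin for surge
                                   # protectors and extension cords.
```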
Now that you have set up a camera system with Prime Color cameras, all of the connected cameras should be listed under the Devices pane. At this point, launch Motive and check the following items to make sure your system is operating properly.
2D Frame Delivery: There should be no dropped 2D frames. If frame drops are reported continuously, lower the bit-rate setting or revisit the network configuration and make sure the data loads are balanced. For more information, see the bandwidth section of this page.
CPU Usage: Open the Windows Task Manager and check the CPU processing load. If only one of the CPU cores is fully occupied, the CPU is not fast enough to process the data from the color camera. In this case, you will want to use a faster CPU or lower the bit-rate setting.
When you launch Motive, connected Prime Color cameras will be shown, and you can configure their settings as you would for other tracking cameras. Open the Devices pane and the Properties pane, and select the Prime Color camera(s). In the Properties pane, the key properties specific to the selected color cameras will be listed. Optimizing these settings is important in order to obtain the best quality images without overflowing the network bandwidth. The key settings for the color cameras are the image resolution, gamma correction, compression mode, and bit-rate, which are covered in the following sections.
Resolution | Max Frame Rate
---|---
960 x 540 (540p) | 500 FPS
1280 x 720 (720p) | 360 FPS
1920 x 1080 (1080p) | 250 FPS
Data output from the entire camera system can be monitored through the Status Panel. Output from individual cameras can be monitored from the 2D Camera Preview pane when Camera Info is enabled under the visual aids option.
If you are using eStrobes to light up the capture volume, the LED setting must be enabled on the Prime Color cameras to which the eStrobes connect. When this setting is enabled, the Prime Color camera will output signals from its RCA sync output port, allowing the eStrobes to receive this signal and illuminate their LEDs.
In order to calibrate the color camera into the 3D capture volume, the Prime Color camera must be equipped with an IR filter switcher. Prime Color cameras without an IR filter switcher cannot be calibrated and can only be used as reference cameras to monitor the reference views.
When loaded into Motive, Prime Color cameras without an IR filter switcher will be hidden in the 3D viewport. Only Prime Color cameras with the filter switcher will be shown in the 3D space.
To calibrate the camera, switch the Prime Color FS out of Color Video Mode in the Devices pane so that it detects in the IR spectrum, then use the active wand to follow the standard calibration process. Once the calibration is finished, you can switch the camera back to Color Video Mode.
Currently, we only take custom orders for the active wands, but in the future, they will be available for sale. For additional questions about active wands, please contact us.
Once the color videos have been saved into TAK files, the captured reference videos can be exported into AVI files using either the H.264 or MJPEG compression format. The H.264 format allows faster export of the recorded videos and is recommended. Video for the current TAK can be exported by clicking File tab -> Export Video in Motive, or directly from the Data pane by right-clicking on the Take(s) and clicking Export Video from the context menu. The following export dialogue window will open, and you will be able to configure the export settings before outputting the files:
If there are multiple TAK files containing reference video recordings, you can export the videos all at once from the Data pane or through the batch processor. When exporting directly from the Data pane, simply CTRL-select multiple TAK files, right-click to bring up the context menu, and click Export Video. When using the batch processor (NMotive), the VideoExporter class can be used to export videos from loaded TAK files.
The size of the exported video file can be reduced further by re-encoding and additional subsampling. This can be done with third-party video processing software, and doing so can shrink the exported file by up to two orders of magnitude. Most high-end video editing software supports this, and Handbrake is a freely available open-source tool that is also capable of doing it. Since the exported video file can be large, we suggest using one of these third-party tools to re-encode the exported video file.
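As an illustration of such re-encoding (file names are examples; ffmpeg is another freely available tool that serves the same purpose as Handbrake):

```python
# Re-encode an AVI exported from Motive to H.264 using ffmpeg.
# The -crf quality factor is a trade-off: lower values mean higher
# quality and larger files.
import subprocess

subprocess.run([
    "ffmpeg", "-i", "exported_take.avi",   # video exported from Motive
    "-c:v", "libx264",                     # H.264 re-encode
    "-crf", "23",                          # reasonable default quality
    "exported_take_small.mp4",
], check=True)
```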
A: If the disk drive on the host PC is not fast enough to write the data, the RAM usage will gradually creep up to its maximum while recording a capture, in which case the recorded TAK file may be corrupted or incomplete. If you are seeing this issue, lower the bit-rate setting to reduce the amount of data, or use a faster disk drive.
Network Bandwidth: Insufficient network bandwidth will cause frame drops. Make sure the network setup, including the network switches, Ethernet cables, and the network adapter on the host PC, is capable of transmitting and receiving data fast enough.
Recommended | Minimum
---|---
OS: Windows 10, 11 (64-bit) | OS: Windows 10, 11 (64-bit)
CPU: Intel i7 or better | CPU: Intel i7
RAM: 16GB of memory | RAM: 4GB of memory
GPU: GTX 1050 or better with the latest drivers |
Function | Default Control
---|---
Rotate view | Right + Drag
Pan view | Middle (wheel) click + drag
Zoom in/out | Mouse Wheel
Select in View | Left mouse click
Toggle selection in View | CTRL + left mouse click
Setting | Value | Description
---|---|---
Gain | 1: Low (Short Range) | Set the Gain setting to low for all cameras. Higher gain settings will amplify noise in the image.
Frame Rate | Maximum FPS | Set the system frame rate (FPS) to its maximum value. If you wish to use a slower frame rate, use the maximum frame rate during calibration and turn it down for the actual recording.
Threshold (THR) / IR LED | 200 / 15 | Do not change the Threshold (THR) or LED values; keep them at their default settings. The EXP and LED values are linked, so change only the EXP setting for brighter images. If you turn the EXP higher than 250, make sure to wand extra slowly to avoid blurred markers.
Exposure (EXP) | Most stable | For precision capture, it is not always necessary to set the camera exposure to its lowest value. Instead, configure the exposure setting so that the reconstruction is most stable. Zoom into a marker and examine the jitter while changing the exposure setting, and use the exposure value that gives the most stable reconstruction. Later sections cover how to check the reconstruction and tracking quality. For now, set this number as low as possible while maintaining tracking without losing the contrast of the reflections.
Setting | Value | Description
---|---|---
Residual (mm) | < 2.00 | Set the allowable residual value smaller for precision volume tracking. Any offset above 2.00 mm will be considered inaccurate, and the corresponding 2D data will be excluded from contributing to the reconstruction.
Minimum Rays | ≥ 3 | Set the minimum required number of rays higher. More accurate reconstruction is achieved when more rays converge within the allowable residual offset.
Minimum Thresholded Pixels | ≥ 4 | Since cameras are placed closer to the tracked markers, each marker appears bigger in the camera views. The minimum number of thresholded pixels can be increased to filter out small extraneous reflections if needed.
Circularity | ≥ 0.6 | Increasing the circularity value filters out non-marker reflections. Furthermore, it prevents collecting data from merged reflections, where the calculated centroid is no longer reliable.
Quick Start Panel
The Quick Start panel provides quick access to typical initial actions when using Motive. Each option quickly leads you to the layouts and actions for the corresponding selection. If you do not wish to see this panel again, uncheck the box at the bottom. This panel can be re-accessed under the Help tab.
Devices pane
Properties pane
Perspective View pane
Camera Preview pane
Calibration pane
Control Deck
The Control Deck, located at the bottom of Motive, is where you control the recording (Live mode) or playback (Edit mode) of capture data. In Live mode, you can use the Control Deck to start recording and assign a filename for the capture. In Edit mode, you can use it to control the playback of recorded Takes.
Viewport
When the color of the Rigid Body matches the assigned Rigid Body color, Motive is connected to the IMU and receiving data.
If the color is orange, the IMU is attempting to calibrate. Slowly rotate the object until the IMU finishes calibrating.
If the color is red, the Rigid Body is configured to receive IMU data, but no data is coming through the designated RF channel. Make sure the Active Tag ID and RF channel values match the configuration on the active Tag/Puck.
This page covers the general specifications of the Prime Color camera. For details on how to set up and use the Prime Color, please refer to the Prime Color Setup page in this wiki.
Connected cameras are listed under the Devices pane. This panel is where you configure settings (FPS, exposure, LED, etc.) for each camera and decide whether to use selected cameras for 3D tracking or reference videos. Only cameras set to tracking mode contribute to reconstructing 3D coordinates; cameras in reference mode capture grayscale images for reference purposes only. The Devices pane can be accessed under the View tab in Motive or by clicking the icon on the main toolbar.
When an item is selected in Motive, all of its related properties are listed under the Properties pane. For example, if you have selected a skeleton in the 3D viewport, its corresponding properties are listed under this pane, where you can view and configure them as needed. You can also select connected cameras, sync devices, Rigid Bodies, any external devices listed in the Devices pane, or recorded Takes to view and configure their properties. This pane is used in almost all workflows. The Properties pane can be accessed under the View tab in Motive or by clicking the icon on the main toolbar.
The Perspective View pane is where 3D data is displayed in Motive. Here, you can view, analyze, and select reconstructed 3D coordinates within a calibrated capture volume. This panel can be used both in live capture and recorded data playback. You can also select multiple markers and define rigid bodies and skeleton assets. If desired, additional view panes can be opened under the View tab or by clicking icons on the main toolbar.
The Camera Preview pane shows 2D views of cameras in a system. Here you can monitor each camera view and apply mask filters. This pane is also used to examine 2D objects (circular reflections) that are captured, or filtered, in order to examine what reflections are processed and reconstructed into 3D coordinates. If desired, additional view panes can be opened under the View tab or by clicking icons on the main toolbar.
The Calibration pane is used in the camera calibration process. In order to compute 3D coordinates from captured 2D images, the camera system must first be calibrated. All tools necessary for calibration are included in the Calibration pane, which can be accessed under the View tab or by clicking its icon on the main toolbar.
Download the Motive 3.1 software installer from the Motive Download Page to each host PC.
Run the installer and follow its prompts.
Each V120:Duo and V120:Trio includes a free license to Motive:Tracker for one device. No software license activation or security key is required.
To use multiple V120 devices, connect each one to a separate host PC with Motive installed.
Please see the Host PC Requirements section of the Installation and Activation page for computer specifications.
V120 Duo or Trio device
I/O-X (breakout box)
Power adapter and cord
Camera bar cable (attached to I/O-X)
USB Uplink cable
Mount the camera bar in the designated location.
Connect the Camera Bar Cable to the back of the camera and to the I/O-X device, as shown in the diagram above.
Connect the I/O-X device to the PC using the USB uplink cable.
Connect the power cable to the I/O-X device and plug it into a power source.
Make sure the power is disconnected from the I/O-X (breakout box) before plugging or unplugging the Camera Bar Cable. Hot-plugging this cable may damage the device.
The V120 cameras use a preset frequency for timing and can run at 25 Hz, 50 Hz or 100 Hz. To synchronize other devices with the Duo or Trio, use a BNC cable to connect an input port on the receiving device to the Sync Out port on the I/O-X device.
Output options are set in the Properties pane. Select T-Bar Sync in the Devices pane to change output options:
Exposure Time: Sends a high signal based on when the camera exposes.
Passthrough: Sync In signal is passed through to the output port.
Recording Gate: Low electrical signal (0V) when not recording and a high (3.3V) signal when recording is in progress.
Gated Exposure Time: Sends a high signal based on when the camera exposes, only while recording is in progress.
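These four options reduce to simple boolean logic on the camera state. The following is a minimal truth-table sketch under that reading; the function and its signature are hypothetical, not part of any OptiTrack API.

```python
# Hypothetical truth-table for the Sync Out options described above.
def sync_out_high(mode: str, exposing: bool, recording: bool, sync_in: bool) -> bool:
    """Return True when the Sync Out signal would be high (3.3 V)."""
    if mode == "Exposure Time":
        return exposing                    # high whenever the camera exposes
    if mode == "Passthrough":
        return sync_in                     # Sync In signal mirrored to the output
    if mode == "Recording Gate":
        return recording                   # 0 V idle, 3.3 V while recording
    if mode == "Gated Exposure Time":
        return exposing and recording      # exposure pulses, only while recording
    raise ValueError(f"unknown mode: {mode}")
```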
Timing signals from other devices can be attached to the V120 using the I/O-X device's Sync In port and a BNC cable. However, this port does not allow you to change the rate of the device reliably. The only functionality that may work is passing the data through to the output port.
The Sync In port cannot be used to change the camera's frequency reliably.
The V120 ships with a free license for Motive:Tracker installed.
The cameras are pre-calibrated and no wanding is required. The user can set the ground plane.
The V120 runs in Precision, Grayscale, and MJPEG modes. Object mode is not available.
LED lights on the back of the V120 indicate the device's status.
Color | Definition |
---|---|
None | Device is off. |
Red | Device is on. |
Amber | Device is recognized by Motive. |

Color | Definition |
---|---|
None | Tracking/video is not enabled. |
Solid Red | Configured for External-Sync: Sync Not Detected |
Flashing Red | Configured for Default, Free Run Mode, or External-Sync: Sync Detected |
Solid Green | Configured for Internal-Sync: Sync Missing |
Flashing Green | Configured for Internal-Sync: Sync Present |
The OptiTrack Duo/Trio tracking bars are factory calibrated, and there is no need to calibrate the cameras to use the system. By default, the tracking volume is set at the center origin of the cameras, and the axes are oriented so that the Z-axis points forward, the Y-axis up, and the X-axis left.
If you wish to change the location and orientation of the global axis, you can use the ground plane tools from the Calibration pane and use a Rigid Body or a calibration square to set the global origin.
When using the Duo/Trio tracking bars, you can set the coordinate origin at the desired location and orientation using either a Rigid Body or a calibration square as a reference point. Using a calibration square will allow you to set the origin more accurately. You can also use a custom calibration square to set this.
Steps for Adjusting the Coordinate System
First, place the calibration square at the desired origin. If you are using a Rigid Body, its pivot point position and orientation will be used as the reference.
[Motive] Open the Calibration pane.
[Motive] Open the Ground Planes page.
[Motive] Select the type of calibration square that will be used as a reference to set the global origin. Set it to Auto if you are using an OptiTrack calibration square. If you are using a Rigid Body, select the Rigid Body option from the drop-down menu. If you are using a custom calibration square, you will also need to set the vertical offset.
[Motive] Select the calibration square markers or the Rigid Body markers in the Perspective View pane.
[Motive] Click the Set Ground Plane button, and the global origin will be adjusted.
In optical motion capture systems, proper camera placement is very important in order to efficiently utilize the captured images from each camera. Before setting up the cameras, it is a good idea to plan ahead and create a blueprint of the camera placement layout. This page highlights the key aspects and tips for efficient camera placement.
A well-arranged camera placement can significantly improve the tracking quality. When tracking markers, 3D coordinates are reconstructed from the 2D views seen by each camera in the system. More specifically, correlated 2D marker positions are triangulated to compute the 3D position of each marker. Thus, having multiple distinct vantages on the target volume is beneficial because it allows wider angles for the triangulation algorithm, which in turn improves the tracking quality. Accordingly, an efficient camera arrangement should have cameras distributed appropriately around the capture volume. By doing so, not only the tracking accuracy will be improved, but uncorrelated rays and marker occlusions will also be prevented. Depending on the type of tracking application, capture volume environment, and the size of a mocap system, proper camera placement layouts may vary.
An ideal camera placement varies depending on the capture application. In order to figure out the best placements for a specific application, a clear understanding of the fundamentals of optical motion capture is necessary.
To calculate 3D marker locations, tracked markers must be simultaneously captured by at least two synchronized cameras in the system. When not enough cameras are capturing the 2D positions, the 3D marker will not be present in the captured data. As a result, the collected marker trajectory will have gaps, and the accuracy of the capture will be reduced. Furthermore, extra effort and time will be required for post-processing the data. Thus, marker visibility throughout the capture is very important for tracking quality, and cameras need to be capturing at diverse vantages so that marker occlusions are minimized.
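To see why at least two rays are required, consider a minimal least-squares triangulation: each camera contributes a ray, and the marker estimate is the point closest to all rays. The sketch below illustrates the math under simplified assumptions (known camera positions, ideal rays); it is not Motive's actual solver. With a single ray the system is underdetermined, and each additional converging ray adds constraints and averages down noise.

```python
import numpy as np

def triangulate(origins, directions):
    """Least-squares point closest to a set of rays (origin + unit direction)."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projects onto the plane normal to d
        A += P
        b += P @ o
    return np.linalg.solve(A, b)          # singular with fewer than 2 distinct rays

# Two cameras on either side of a marker at (0, 1, 2):
origins = [np.array([-2.0, 1.0, 0.0]), np.array([2.0, 1.0, 0.0])]
directions = [np.array([1.0, 0.0, 1.0]), np.array([-1.0, 0.0, 1.0])]
print(triangulate(origins, directions))   # ~ [0. 1. 2.]
```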
Depending on captured motion types and volume settings, the instructions for ideal camera arrangement vary. For applications that require tracking markers at low heights, it would be beneficial to have some cameras placed and aimed at low elevations. For applications tracking markers placed strictly on the front of the subject, cameras on the rear won't see those and as a result, become unnecessary. For large volume setups, installing cameras circumnavigating the volume at the highest elevation will maximize camera coverage and the capture volume size. For captures valuing extreme accuracy, it is better to place cameras close to the object so that cameras capture more pixels per marker and more accurately track small changes in their position.
Again, the optimal camera arrangement depends on the purpose and features of the capture application. Plan the camera placement specific to the capture application so that the capability of the provided system is fully utilized. Please contact us if you need consulting with figuring out the optimal camera arrangement.
For common applications of tracking 3D position and orientation of Skeletons and Rigid Bodies, place the cameras on the periphery of the capture volume. This setup typically maximizes the camera overlap and minimizes wasted camera coverage. General tips include the following:
Mount cameras at the desired maximum height of the capture volume.
Distribute the cameras equidistantly around the setup area.
Adjust angles of cameras and aim them towards the target volume.
For cameras with rectangular FOVs, mount the cameras in landscape orientation. In very small setup areas, cameras can be aimed in portrait orientation to increase vertical coverage, but this typically reduces camera overlap, which can reduce marker continuity and data quality.
TIP: For capture setups involving large camera counts, it is useful to separate the capture volume into two or more sections. This reduces the computational load on the software.
Around the volume
For common applications tracking a Skeleton or a Rigid Body to obtain the 6 Degrees of Freedom (x,y,z-position and orientation) data, it is beneficial to arrange the cameras around the periphery of the capture volume for tracking markers both in front and back of the subject.
Camera Elevations
For a typical motion capture setup, placing cameras at high elevations is recommended. Doing so maximizes the capture coverage in the volume and minimizes the chance of subjects bumping into the truss structure, which can degrade the calibration. Furthermore, when cameras are placed at low elevations and aimed across from one another, each camera will detect the synchronized IR illumination of the cameras opposite it, which will then need to be masked from the 2D view.
However, it can be beneficial to place cameras at varying elevations. Doing so will provide more diverse viewing angles from both high and low elevations and can significantly increase the coverage of the volume. The frequency of marker occlusions will be reduced, and the accuracy of detecting the marker elevations will be improved.
Camera to Camera Distance
Separating every camera by a consistent distance is recommended. When cameras are placed in close vicinity, they capture nearly identical images of the tracked subject, and the redundant images contribute neither to preventing occlusions nor to the reconstruction calculations. This overlap detracts from the benefit of a higher camera count and adds computational load to the calibration process. It also increases the chance of marker occlusions, because markers will be blocked from multiple views simultaneously whenever obstacles are introduced.
Camera to Object Distance
An ideal distance between a camera and the captured subject also depends on the purpose of the capture. A long distance between the camera and the object gives more camera coverage for larger volume setups. On the other hand, capturing at a short distance gives less camera coverage, but the tracking measurements will be more accurate. The camera's lens focus ring may need to be adjusted for close-up tracking applications.
Before setting up a motion capture system, choose a suitable setup area and prepare it in order to achieve the best tracking performance. This page highlights some of the considerations to make when preparing the setup area for general tracking applications. Note that this page provides just general recommendations and these could vary depending on the size of a system or purpose of the capture.
First of all, pick a place to set up the capture volume.
Setup Area Size
System setup area depends on the size of the mocap system and how the cameras are positioned. To get a general idea, check out the Build Your Own feature on our website.
Make sure there is plenty of room for setting up the cameras. It is usually beneficial to have extra space in case the system setup needs to be altered. Also, pick an area where there is enough vertical spacing as well. Setting up the cameras at a high elevation is beneficial because it gives wider lines of sight for the cameras, providing a better coverage of the capture volume.
Minimal Foot Traffic
After camera system calibration, the system should remain unaltered in order to maintain the calibration quality. Physical contacts on cameras could change the setup, requiring it to be re-calibrated. To prevent such cases, pick a space where there is only minimal foot traffic.
Flooring
Avoid reflective flooring. The IR lights from the cameras could be reflected by it and interfere with tracking. If this is inevitable, consider covering the floor with surface mats to prevent the reflections.
Avoid flexible or deformable flooring; such flooring can negatively impact your system's calibration.
For the best tracking performance, minimize ambient light interference within the setup area. The motion capture cameras track the markers by detecting reflected infrared light and any extraneous IR lights that exist within the capture volume could interfere with the tracking.
Sunlight: Block any open windows that might let sunlight in. Sunlight contains wavelength within the IR spectrum and could interfere with the cameras.
IR Light sources: Remove any unnecessary lights in IR wavelength range from the capture volume. IR lights could be emitted from sources such as incandescent, halogen, and high-pressure sodium lights or any other IR based devices.
All cameras are equipped with IR filters, so extraneous lights outside of the infrared spectrum (e.g. fluorescent lights) will not interfere with the cameras. IR lights that cannot be removed or blocked from the setup area can be masked in Motive using the Masking Tools during the system calibration. However, this feature completely discards image data within the masked regions and an overuse of it could negatively impact tracking. Thus, it is best to physically remove the object whenever possible.
Dark-colored objects absorb most visible light, but that does not mean they absorb IR light as well. The color of a material is therefore not a good indicator of whether an object will be visible in the IR spectrum. Some materials look dark to the human eye but appear bright white to the IR cameras. If such items are placed within the tracking volume, they can introduce extraneous reconstructions.
Since you already have the IR cameras in hand, use one of the cameras to check whether there are IR white materials within the volume. If there are, move them out of the volume or cover them up.
Remove any unnecessary obstacles out of the capture volume since they could block cameras' view and prevent them from tracking the markers. Leave only the items that are necessary for the capture.
Remove reflective objects nearby or within the setup area since IR illumination from the cameras could be reflected by them. You can also use non-reflective tapes to cover the reflective parts.
Prime 41 and Prime 17W cameras are equipped with powerful IR LED rings which enable tracking outdoors, even in the presence of some extraneous IR light. The strong illumination from the Prime 41 cameras allows a mocap system to better distinguish marker reflections from extraneous illumination. System settings and camera placements may need to be adjusted for outdoor tracking applications.
Please read through the Outdoor Tracking Setup page for more information.
This page provides guidelines and recommendations to consider when cabling and wiring USB-based and/or Ethernet-based OptiTrack motion capture system.
An Ethernet camera system networks via Ethernet cables. Ethernet-based camera models include PrimeX series (PrimeX 13, 13W, 22, 41), SlimX 13, and Prime Color models. Ethernet cables not only offer faster data transfer rates, but they also provide power over Ethernet to each camera while transferring the data to the host PC. This reduces the number of cables required and simplifies the overall setup. Furthermore, Ethernet cables have much longer length capability (up to 100m), allowing the systems to cover large volumes.
Ethernet cameras connect to the host computer through a Gigabit (1000 Mb/s) Ethernet port. The camera network should be segmented from the office or other local area networks to avoid interference and congestion. If the computer used for capture is connected to an existing network, a second Ethernet port or add-on network card can be used to connect the camera network. When the camera network is not isolated, frame drops may occur.
Turn off the Windows firewall on your camera network. Leaving it enabled can cause connection issues and frame drops.
To turn off your Windows firewall please follow the steps below:
Navigate to Control Panel > System and Security > Windows Defender Firewall
Find where the camera network is located in the network groups. Typically your camera network will be labeled 'Unidentified Network' and located under the Guest or public networks.
Once verified as to which network group your camera network is on, select Turn Windows Defender Firewall on or off in the sidebar.
From this window select Turn off Windows Defender Firewall for the network group that your camera network is on.
Click OK.
After you click OK, the window will revert to the main firewall page. You can verify that the change has been made if the network group you selected now has a red 'x' shield icon next to it.
You can close this window and continue setting up your camera network.
It is recommended to only change Advanced Firewall settings under the guidance of a Support Engineer or your organization's IT department. Some settings can cause breaches in security if not done correctly. Please contact our Support team if you are having connectivity issues.
Cable Type
There are multiple categories of Ethernet cables, with different specifications for maximum data transmission rate and cable length. For an Ethernet-based system, Cat6 or above Gigabit Ethernet cables should be used. 10 Gigabit Ethernet cables (Cat6a or above) are recommended, in conjunction with a 10 Gigabit uplink switch, for the connection between the uplink switch and the host PC in order to accommodate the high data traffic.
Electromagnetic Shielding
Also, please use cables with electromagnetic interference shielding. If unshielded cables are routed close to each other, they can interfere and cause cameras to stall in Motive.
Ethernet Camera Models: PrimeX series and SlimX 13 cameras. Follow the below wiring diagram and connect each of the required system components.
Connect the PoE Switch(es) to the Host PC: Start by connecting a PoE switch to the host PC via an Ethernet cable. Since the camera system takes up a large amount of data bandwidth, the Ethernet camera network traffic must be separated from the office/local area network. If the computer used for capture is connected to an existing network, you will need a second Ethernet port or add-on network card to connect the computer to the camera network. When you do, make sure to turn off the computer's firewall for that particular network under the Windows Firewall settings.
Connect the Ethernet Cameras to the PoE Switch(s): Ethernet cameras connect to the host PC via PoE/PoE+ switches using Cat 6, or above, Ethernet cables.
Power the Switches: The switch must be powered on in order to power the cameras. To completely shut down the camera system, the network switch needs to be powered off.
Ethernet Cables: Ethernet cable connection is subject to the limitations of the PoE (Power over Ethernet) and Ethernet communications standards, meaning that the distance between camera and switch can go up to about 100 meters when using Cat 6 cables (Ethernet cable type Cat5e or below is not supported). For best performance, do not connect devices other than the computer to the camera network. Add-on network cards should be installed if additional Ethernet ports are required.
External Sync: If you wish to connect external devices, use the eSync synchronization hub. Connect the eSync into one of the PoE switches using an Ethernet cable, or if you have a multi-switch setup, plug the eSync into the aggregation switch.
Uplink Switch: For systems with higher camera counts that use multiple PoE switches, use an uplink Ethernet switch to link all of the switches and connect them to the host PC. The switches must be connected in a star topology, with the uplink switch as the central node connecting to the host PC. NEVER daisy-chain multiple PoE switches in series, because doing so can introduce latency to the system.
High Camera Counts: For setting up more than 24 Prime series cameras, we recommend using a 10 Gigabit uplink switch and connecting it to the host PC via an Ethernet cable that supports 10 Gigabit transfer rate — Cat6a or above. This will provide larger data bandwidth and reduce the data transfer latency.
PoE switch requirement: The PoE switches must be able to provide 15.4W power to every port simultaneously. PrimeX 41, PrimeX 22, and Prime Color camera models run on a high power mode to achieve longer tracking ranges, and they require 30W of power from each port. If you wish to operate these cameras at standard PoE mode, set the LLDP (PoE+) Detection setting to false under the application settings. For network switches provided by OptiTrack, refer to the label for the number of cameras supported for each switch.
Host PC with an isolated network
Ethernet Cameras
Ethernet cables
Ethernet PoE/PoE+ Switches
Uplink switch (for large camera count setup)
The eSync (optional for synchronizations)
OptiTrack’s Ethernet cameras require PoE or PoE+ Gigabit Ethernet switches, depending on the camera's power requirement. The switch serves two functions: transfer camera data to a host PC, and supply power to each camera over the Ethernet cable (PoE). The switch must provide consistent power to every port simultaneously in order to power each camera. Standard PoE switches must provide a full 15.4 watts to every port simultaneously. PrimeX 41, PrimeX 22, and Prime Color cameras have stronger IR strobes which require higher power for the maximum performance. In this case, these cameras need to be routed through PoE+ switches that provide a full 30 watts of power to each port simultaneously. Note that PoE Midspan devices or power injectors are not suitable for Ethernet camera systems.
The following is generally used for large PoE+ camera setups with multiple camera switches. Please refer to the Switch Power Budget and Camera Power Requirements tab above for more information.
Some switches have a power budget smaller than what is needed by the OptiTrack cameras being used. In larger camera setups, this can leave multiple switches able to use only a portion of their available ports. In this case, we recommend a Redundant Power System (RPS) to extend the power budget of your switch. For example, a 24-port switch may have a 370W power budget, which only supports 12 PoE+ cameras that require 30W each. If the same 24-port switch is paired with an RPS, you can power 24 PoE+ cameras with a 30W power requirement, utilizing all 24 PoE ports on the switch.
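The budget arithmetic in the example above generalizes directly. The sketch below is a hypothetical helper using the per-port wattages quoted on this page; the function name is illustrative only.

```python
# Power-budget arithmetic from the example above.
def max_powered_cameras(switch_budget_watts: float, camera_draw_watts: float) -> int:
    """How many cameras a switch can power simultaneously from its PoE budget."""
    return int(switch_budget_watts // camera_draw_watts)

print(max_powered_cameras(370, 30.0))   # 12 PoE+ cameras (e.g. PrimeX 41) without an RPS
print(max_powered_cameras(370, 15.4))   # 24 standard PoE cameras on the same budget
```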
The eSync is used to enable synchronization and timecode in Ethernet-based mocap systems. Only one device is needed per system, and it enables you to link the system to almost any signal source. It has multiple synchronization ports that allow integrating external signals from other devices. When an eSync is used, it is the master in the synchronization chain.
With large camera system setups, you should connect the eSync onto the aggregator switch via a standard Ethernet port for more stable camera synchronization. If PoE is not supported on the aggregator switch, the sync hub will need to be powered separately from a power outlet.
If the number of cameras in the system exceeds the number of ports available on the switch, a star topology with an uplink switch connecting the subsequent switches is required. In this case, large amounts of data will be transferred through the uplink switch. In order to cope with the high bandwidth, it is recommended to use a 10 Gigabit uplink switch and connect it to the host PC with a 10 Gigabit cable (Cat6a or above). Otherwise, system latency can increase and frame drops may occur.
A USB camera system provides high-quality motion capture for small to medium size volumes at an affordable price range. USB camera models include the Flex series (Flex 3 and Flex 13) and Slim 3U models. USB cameras are powered by the OptiHub, which is designed to maximize the capacity of Flex series cameras by providing sufficient power to each camera, allowing tracking at long ranges.
For each USB system, up to four OptiHubs can be used. When incorporating multiple OptiHubs in the system, use RCA synchronization cables to interconnect each hub. A USB system is not suitable for a large volume setup because the USB 2.0 cables used to wire the cameras have a 5-meter length limitation.
If needed, up to two active USB extensions can be used when connecting the OptiHub to the host PC. However, the extensions should not be used between the OptiHub and the cameras. We do not support using more than 2 USB extensions anywhere on a USB 2.0 system running Motive.
Main Components
Host PC
USB Cameras
OptiHub(s) and a power supply for each hub.
USB 2.0 cables:
USB 2.0 Type A/B per OptiHub.
USB 2.0 Type B/mini-b per camera.
OptiHub
The OptiHub is a custom-engineered USB hub designed to be incorporated in a USB camera system. It provides both power and external synchronization options. Standard USB ports do not provide enough power for the IR illumination within Flex 13 cameras, so the cameras need to be routed through an OptiHub in order to activate the LED array.
USB Load Balancing
When connecting hubs to the computer, load balancing becomes important. Most computers have several USB ports on the front and back, all of which go through two USB controllers. Especially for large camera count systems (18+ cameras), it is recommended to split the cameras evenly between the USB controllers to make the best use of the available bandwidth.
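As a toy illustration of the even split described above, the sketch below alternates cameras between two controllers; the names and structure are hypothetical, since in practice the assignment is done physically by choosing which ports the OptiHubs plug into.

```python
# Illustrative only: even split of cameras/hubs across two USB controllers.
def split_across_controllers(camera_ids):
    """Alternate cameras between controller A and B to balance bandwidth."""
    return {"controller_A": camera_ids[::2], "controller_B": camera_ids[1::2]}

print(split_across_controllers(list(range(1, 19))))  # 18-camera system -> 9 + 9
```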
OptiSync
OptiSync is a custom synchronization protocol which allows sending the synchronization signals through the USB cable. It allows each camera to have one USB cable for both data transfer and synchronization instead of having separate USB and daisy-chained RCA synchronization cables as in the older models.
Difference Between OptiSync and Wired Sync
OptiSync
The OptiSync is a custom camera-to-camera synchronization protocol designed for Flex series cameras. The OptiSync protocol sends and receives sync signals over the USB cable, without the need for RCA sync cables. This sync method is only available when using Flex 3 or Flex 13 cameras connected to the OptiHub.
Wired Sync
The Wired Sync is a camera-to-camera synchronization protocol using RCA cables in a daisy-chain arrangement. With a master RCA sync cable connecting the master camera to the OptiHub, each camera in the system is connected in series via RCA sync cables and splitters. The V100:R1 (legacy) and Slim 3U cameras utilize Wired Sync only, and therefore any OptiTrack system containing these cameras needs to be synchronized through Wired Sync. Wired Sync is optionally available for Flex 3 cameras.
At this point, all of the connected cameras will be listed on the Devices pane and the 3D viewport when you start up Motive. Check to make sure all of the connected cameras are properly listed in Motive.
Then, open the Status Log panel and check that there are no 2D frame drops. You may see a few frame drops when booting up the system or when switching between Live and Edit modes; however, this should occur only momentarily. If the system continues to drop 2D frames, there is a problem with how the system is delivering the camera data. Please refer to the troubleshooting section for more details.
Choosing an appropriate camera mounting solution is very important when setting up a capture volume. A stable setup not only prevents camera damage from unexpected collisions, but it also maintains calibration quality throughout capture. All OptiTrack cameras have ¼-20 UNC Threaded holes – ¼ inch diameter, 20 threads/inch – which is the industry standard for mounting cameras. Before planning the mount structures, make sure that you have optimized your camera placement plans.
Due to thermal expansion issues when mounted to walls, we recommend using Trusses or Tripods as primary mounting structures.
Trusses will offer the most stability and are less prone to unwanted camera movement for more accurate tracking.
Tripods, alternatively, offer more mobility to change the capture volume.
Wall Mounts and Speed Rails offer the ability to maximize space, but are the most susceptible to vibration from HVAC systems, thermal expansion, earthquake resistant buildings, etc. This vibration can cause inaccurate calibration and tracking.
Camera clamps are used to fasten cameras onto stable mounting structures, such as a truss system, wall mounts, speed rails, or large tripods. There are some considerations when choosing a clamp for each camera. Most importantly, the clamps need to be able to bear the camera weight. Also, we recommend using clamps that offer adjustment of all 3 degrees of orientation: pitch, yaw, and roll. The stability of your mounting structure and the placement of each camera is very important for the quality of the mocap data, and as such we recommend using one of the mounting structures suggested in this page.
Here at OptiTrack, we recommend and provide Manfrotto clamps that have been tested and verified to ensure a solid hold on cameras and mounting structures. If you would like more information regarding Manfrotto clamps, please visit our Mounts and Tripods page on our website or reach out to our Sales team.
Manfrotto clamps come in three parts:
Manfrotto 035 Super Clamp
Manfrotto 056 3-Way, Pan-and-Tilt Head with 1/4"-20 Mount
Reversible Short Brass Stud
For proper assembly, please follow the steps below:
Place the brass stud into the 16mm hexagon socket in the Manfrotto Super Clamp.
Depress the spring-loaded button so the brass stud will lock into place.
Tighten the safety pin mechanism to secure the brass stud within the hexagon socket. Be sure that the 3/8″ screw (larger) end of the stud is facing out.
From here, attach the Super Clamp to the 3-Way, Pan-and-Tilt Head by screwing in the brass stud into the screw hole of the 3-Way, Pan-and-Tilt Head.
Tighten these two components firmly, as you don't want them to swivel when installing cameras. It helps to first tighten the 360° swivel on the 3-Way, Pan-and-Tilt Head, as this ensures no unwanted swiveling occurs when tightening the two components together.
Once these two components are attached, you have a fully functioning clamp for attaching your cameras.
Large scale mounting structures, such as trusses and wall mounts, are the most stable and can be used to reliably cover larger volumes. Cameras are well-fixed and the need for recalibration is reduced. However, they are not easily portable and cannot be easily adjusted. On the other hand, smaller mounting structures, such as tripods and C-clamps, are more portable, simple to setup, and can be easily adjusted if needed. However, they are less stable and more vulnerable to external impacts, which can distort the camera position and the calibration. Choosing your mounting structure depends on the capture environment, the size of the volume, and the purpose of capture. You can use a combination of both methods as needed for unique applications.
Choosing an appropriate structure is critical in preparing the capture volume, and we recommend our customers consult our Sales Engineers for planning a layout for the camera mount setup.
A truss system provides a sturdy structure and a customizable layout that can cover diverse capture volume sizes, ranging from a small volume to a very large volume. Cameras are mounted on the truss beam using the camera clamps.
Consult with the truss system provider or our Sales Engineers for setting up the truss system.
Follow the truss installation instruction and assemble the trusses on-site, and use the fastening pins to secure each truss segment.
Fasten the base truss to the ground.
Connect each of the segments and fix them by inserting a fastening pin.
Attach clamps to the cameras.
Mount the clamps to the truss beam.
Aim each camera.
Tripods are portable and simple to install, and they are not restricted by environmental constraints. There are various sizes and types of tripods for different applications. To ensure its stability, each tripod needs to be installed on a hard surface (e.g. concrete). Usually one camera is attached per tripod, but camera clamps can be used in combination to fasten multiple cameras along a leg, as long as the tripod is stable enough to bear the weight. Note that tripod setups are less stable and vulnerable to physical impacts. Any camera movement after calibration degrades the calibration quality, and the volume will need to be re-calibrated.
Wall mounts and speed rails are used with camera clamps to mount the cameras along the walls of the capture volume. This setup is very stable and has a low chance of being disturbed by physical contact. The capture volume size and layout will depend on the size of the room. However, note that the wall, or the building itself, may fluctuate slightly with the changing ambient temperature throughout the day. Therefore, you may need to routinely re-calibrate the volume if you are looking for precise measurements.
Below are recommended steps when installing speed rails onto different types of wall material. However, depending on your space, you may require alternative methods.
Although we have instructions below for installing speed rails, we highly recommend leaving the installation to qualified contractors.
General Tools
Cordless drill
Socket driver bits for drill
Various drill bits
Hex head Allen wrench set
Laser level
Speed Rail Parts
Pre-cut rails
Internal locking splice
5" offset wall mount bracket
End caps (should already be pre-installed onto pipes)
Elbow speed rail bracket (optional)
Tee speed rail bracket (optional)
Wood Stud Setup
Wood frame studs behind drywall require:
Pre-drilled holes.
2 1/2" long x 5/16" hex head wood lag screws.
Metal Stud Framing Setup
Metal stud framing behind drywall requires:
Undersized pre-drilled holes as a marker in the drywall.
2"long x 5/16" self tapping metal screws with hex head.
Metal studs can strip easily if pre-drilled hole is too large.
Concrete Block/Wall Setup
Requires:
Pre-drilled holes.
Concrete anchors inserted into pre-drilled hole.
2 1/2" concrete lags.
Concrete anchors and lags must match for a proper fit.
It is easiest and safest to install with another person rather than alone, and a second person is especially necessary when rails have been pre-inserted into brackets prior to installing on a wall.
Pre-drill bracket locations.
If working in a smaller space, slip speed rails into brackets prior to installing.
Install all brackets by the top lag first.
Check to see if all are correctly spaced and level.
Install bottom lags.
Slip speed rails into brackets.
Set screw and internal locking splice of speed rail.
Attach clamps to the cameras.
Attach the clamps to the rail.
Aim each camera.
Helpful Tips/Additional Information
The 5" offset wall brackets should not exceed 4' between each bracket.
Speed rails are shipped no longer than 8'.
Using blue painter's tape is a simple way to mark placement without messing up paint.
Make sure to slide the end of the speed rail without the end cap in first. If it is installed with the end-cap end first, it will "mushroom" the end and make it difficult to slip brackets onto the speed rail.
Check brackets for any burs/sharpness and gently sand off to avoid the bracket scratching the finish on the speed rail.
To further reduce the bracket scratching the finish on the speed rail, use a piece of paper inside the bracket prior to sliding the speed rail through.
In order to ensure that every camera in a mocap system takes full advantage of its capability, the cameras need to be focused and aimed at the target tracking volume. This page includes detailed instructions on how to adjust the focus and aim of each camera for optimal motion capture. OptiTrack cameras are focused at infinity by default, which is generally sufficient for common tracking applications. However, we recommend that users always double-check the camera view and make sure the captured images are in focus when first setting up the system. Obtaining the best quality image is very important, as the 3D data is derived from the captured images.
Make sure that the camera placement is appropriate for your application.
Pick a camera to adjust the aim and focus.
Set the camera to raw grayscale video mode (in Motive) and increase the camera exposure to capture the brightest image. (These steps are accomplished by the Aim Assist button on featured cameras.)
Place one or more reflective markers in the tracking volume.
Carefully adjust the camera angle while monitoring the Camera Preview so that the desired capture volume is included within the camera coverage.
Within the Camera Preview pane in Motive, zoom in on one of the markers so that it fills the frame.
Adjust the focus (detailed instruction given below) so that the captured image is resolved as clearly as possible.
Repeat above steps for other cameras in the system.
Adjusting aim single-handedly can be difficult because the user has to run back and forth between the camera and the host PC to adjust the camera angle and monitor the 2D view at the same time. OptiTrack cameras featuring the Aim Assist button (Prime series and Flex 13) make this aiming process easier. With one button-click, the user can set the camera to grayscale mode and set the exposure to its optimal value for adjusting both aim and focus. Fit the capture volume within the vertical and horizontal range shown by the virtual crosshairs that appear when Aim Assist mode is on. With this feature, a single user no longer needs to go back to the host PC to choose cameras and change their settings. Settings for the Aim Assist button are available from the Application Settings pane.
All OptiTrack cameras (except the V120:Duo/Trio tracking bars) can be re-focused to optimize image clarity at any distance within the tracking range. Change the camera to raw grayscale mode and adjust the camera settings (increase the exposure and LED settings) to capture the brightest image. Zoom in on one of the reflective markers in the capture volume and check the clarity of the image. Then, adjust the camera focus and find the point where the marker image is best resolved. The following images show some examples.
Auto-zoom using Aim Assist button
Double-click on the aim assist button to have the software automatically zoom into a single marker near the center of the camera view. This makes the focusing process a lot easier to accomplish for a single person.
PrimeX 41 and PrimeX 22
For PrimeX 41 and 22 models, camera focus can be adjusted by rotating the focus ring on the lens body, which can be accessed at the center of the camera. The front ring on the lens changes the focus of the camera, and the rear ring adjusts the f-stop of the lens. In most cases, it is beneficial to set the f-stop low so that the aperture is at its maximum size for capturing the brightest image. Carefully rotate the focus ring while monitoring the 2D grayscale camera view for image clarity. Once the focus and f-stop have been optimized on the lens, lock them down by tightening the set screw. In the default configuration, PrimeX 41 cameras are equipped with a 12mm f/1.8 lens, and PrimeX 22 cameras with a 6.8mm f/1.6 lens.
Prime 17W and 41*
For Prime 17W and 41 models, camera focus can be adjusted by rotating the focus ring on the lens body, which can be accessed at the center of the camera. On the Prime 41, the front ring changes the focus; on the Prime 17W, the rear ring does. Set the aperture to its maximum size in order to capture the brightest image. On the Prime 41, the aperture ring is located at the rear of the lens body, whereas on the Prime 17W it is located at the front. Carefully rotate the focus ring while monitoring the 2D grayscale camera view for image clarity. Align the mark with the infinity symbol when setting the focus back to infinity. Once the focus has been optimized, lock it down by tightening the set screw.
*Legacy camera models
PrimeX 13 and 13W, and Prime 13* and 13W*
*Legacy camera models
Slim Series
SlimX 13 cameras also feature M12 lenses. The camera focus can easily be adjusted by rotating the lens, without the need to remove the housing. Slim cameras support multiple lens types, including third-party lenses, so focusing techniques will vary. Refer to the lens type to determine how to proceed. (In general, M12 lenses are focused by rotating the lens body, while C and CS mount lenses are focused by rotating the focus ring.)
Required PC specifications vary depending on the size of the camera system. Generally, the recommended specs are required for systems with more than 24 cameras.
1. Run the Installer
When the download is complete, run the installer to initiate the installation process.
2. Install the USB Driver and Dependencies
If you are installing Motive for the first time, it will prompt you to install the OptiTrack USB Driver. This driver is required for all OptiTrack USB devices, including the Hardware Key. You may also need to install other dependencies, such as the C++ redistributable and DirectX. After all dependencies have been installed, continue on to installing Motive.
It is important to install the specific versions required by Motive 2.3.x, even if newer versions are installed.
3. Install Motive
Follow the installation prompts and install Motive in your desired directory. We recommend installing the software in the default directory, C:\Program Files\OptiTrack\Motive.
4. OptiTrack Peripheral Module
At the Custom Setup section of the installation process, you will be asked whether to install the Peripheral Module along with Motive. If you plan to use force plates, NI-DAQ, or EMG devices along with the motion capture system, make sure the Peripheral Module is installed. If you are not going to use these devices, you may skip to the next step.
Peripheral Module NI-DAQ
If you chose to install the Peripheral Module, you will be prompted to install the OptiTrack Peripherals Module along with the NI-DAQmx driver at the end of the Motive installation. Press Yes to install the plugins and the NI-DAQmx driver. This may take a few minutes, and it only needs to be done once.
5. Finish Installation
Firewall / Anti-Virus
Make sure all anti-virus software on the Host PC is allowing Motive.
For Ethernet cameras, make sure the Windows firewall is configured to allow the camera network to be recognized. Disabling it entirely is another option.
High-Performance
Windows' power saving mode limits CPU usage. In order to best utilize Motive, set the power plan to High Performance to remove these limitations. You can configure High Performance mode from Control Panel → Hardware and Sound → Power Options, as shown in the image below.
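If you prefer to script the host PC setup, the same change can be made with Windows' built-in powercfg utility, where SCHEME_MIN is the alias of the High Performance plan. The Control Panel route above is the documented method; this is only a convenience sketch.

```python
import subprocess

# Activate the built-in High Performance power plan (alias SCHEME_MIN).
# Equivalent to the Control Panel steps described above; Windows only.
subprocess.run(["powercfg", "/setactive", "SCHEME_MIN"], check=True)
```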
Graphics Card Settings
This is only for computers with integrated graphics.
For computers with integrated graphics, please make sure Motive is set to run on the dedicated graphics card. If the host computer has integrated graphics on the CPU, the PC may switch to the integrated graphics when the computer goes into sleep mode; when this happens, the viewport may become unresponsive after waking. If your computer has integrated graphics, go to the Graphics Settings in Windows and browse to Motive to set it to high-performance graphics.
Once you have installed Motive, the next step is to activate the software using the provided license information and a USB Security Key. Motive activation requires a valid Motive 3.0 license, a USB Security Key, and a computer with access to the Internet.
For Motive 2.x, a USB Hardware Key is required to use the camera system. The Hardware Key stores licensing information and allows you to use a single license to perform different tasks using different computers. Hardware keys are purchased separately. For more information, please see the following page:
There are five different types of Motive licenses: Motive:Body-Unlimited, Motive:Body, Motive:Tracker, Motive:Edit-Unlimited, and Motive:Edit. Each license unlocks different features in the software depending on the use case that the license is intended to facilitate.
The Motive:Body and Motive:Body-Unlimited licenses are intended for either small (up to 3) or large-scale Skeleton tracking applications.
The Motive:Tracker license is intended for real-time Rigid Body tracking applications.
The Motive:Edit and Motive:Edit-Unlimited licenses are intended for users modifying data after it has already been captured.
Step 1. Launch Motive
First, launch Motive.
Step 2. Activate
The Motive splash screen will pop up and it will indicate that the license is not found. Click and open the license tool and fill out the following fields using provided license information. You will need the License Serial Number and License Hash from your order invoice and the Hardware Key Serial Number indicated on the USB security key or the hardware key. Once you have entered all the information, click Activate. If you have already activated the license before on another machine, make sure the same name is entered when activating.
Online Activation Tool
The Motive license can also be activated online using the Online License Activation tool. When you use the online tool, you will receive the license file via email; place this file in the license folder. Once the license file is in place, insert the corresponding USB Hardware Key to use Motive.
Step 3. License File
If Motive is activated properly, license files will be placed in the license folder. This folder can be accessed from the splash screen or by navigating to Start Menu → All Programs → OptiTrack → Motive → OptiTrack License Folder.
License Folder: C:\ProgramData\OptiTrack\License
Step 4. Hardware Key
If not already done, insert the corresponding Hardware Key that was used to activate the license. The matching security key must be connected to the computer in order to use Motive.
Notes on Connecting the Hardware Key
Connect the Hardware Key to a USB port where the bus does not have a lot of traffic. This is especially important if you have other peripheral devices that connect to the computer via USB. If there is too much data flowing through the USB bus used by the Hardware Key, Motive might not be able to connect to the cameras.
Make sure USB Hardware Key is plugged in all the way.
About Motive
You can also check the status of the activated license from the About Motive pop-up. This can be accessed from the splash screen when Motive fails to detect a valid license, or from the Help → About Motive menu in Motive.
License Data:
In this panel, you can also export the license data to a TXT file by clicking License Data.... If you are having any issues activating Motive, please export this file and attach it to your support email.
OptiTrack software can be used on a new computer by reactivating the license, using the same license information. When reactivating, make sure to enter the same name information as before. After the license has been reactivated, the corresponding USB Hardware Key needs to be inserted into the PC in order to verify and run the software.
Another method of using the license is by copying the license file from the old computer to the new computer. The license file can be found in the OptiTrack License folder which can be accessed through the Motive Splash Screen or top Help menu in Motive.
For more information on licensing of Motive, refer to the Licensing FAQs from the OptiTrack website:
For more questions, contact our Support:
When contacting support, please attach the license data (TXT) file exported from the About Motive panel as a reference.
This page includes information on the status indicator lights on the OptiTrack Ethernet cameras.
The PrimeX Series cameras have a front mounted status ring light to indicate the state of the Motive software and firmware updates on the cameras. The following table lists the default ring light color associated with the state of Motive.
Status Ring Light Colors
Color | Status | Description | Can Modify Color | Photo |
---|---|---|---|---|
On every PrimeX camera there is an additional display in the bottom left corner of the face of the camera.
Bottom Left Display Values
If for any reason you need to change the status ring light, go to Settings and, under General, click the color box next to the status you would like to change. This brings up a color picker window where you can choose a solid color, or choose multi-color to oscillate between colors. You can also save a color to your color library to apply it to other statuses.
To disable the Aim Assist button LED on the back of PrimeX cameras, set it to False in Application Settings under General. You can find this setting under Aim Assist > Aiming Button LED.
The PrimeX series cameras also have a status indicator on the back panel, which indicates the state of the camera only. When changing to a new version of Motive, the camera needs a firmware update in order to communicate with the new version. Firmware updates run automatically when Motive starts. If the camera's firmware was updated for a newer version of Motive, running an older version will automatically revert the firmware to the older version.
Back Ring Light Colors
When changing versions of Motive, a firmware update is needed. This process is automatic when opening the software and the status ring light and back ring light show the state, as described in the table above, of the camera during this process. The camera should not be unplugged during a firmware reset or firmware update. Give the camera time to finish this process before turning off the software.
If a camera doesn't update its firmware with the rest of the cameras, it will not be loaded into Motive. Wait for all updating cameras to finish, then restart Motive; the cameras that failed to update will then update. This can be caused by miscommunication with the switch when loading numerous cameras.
Like PrimeX series cameras, SlimX 13 cameras have a status indicator on the back panel, which indicates the state of the camera.
Back Ring Light Colors
Before diving into specific details, let's begin with a brief overview of Motive. If you are new to Motive, we recommend reading through this page to learn the basic tools, configurations, and navigation controls, as well as the instructions for managing capture files.
In Motive, recorded mocap data is stored in a file format called a Take (TAK), and multiple Take files can be grouped within a session folder. The Data pane is the primary interface for managing capture files in Motive. This pane can be accessed from its icon on the main toolbar, and it contains a list of session folders and the corresponding Take files that are recorded or loaded in Motive.
Motive saves and loads Motive-specific file formats, including Take files (TAK), camera calibration files (CAL), and Motive user profiles (MOTIVE), which can contain most of the software settings as well as asset definitions for Skeleton and Rigid Body objects. Asset definitions relate to trackable objects in Motive and are explained further on the Rigid Body and Skeleton pages.
Motive file management is centered on the Take (TAK) file. A TAK file is a single motion capture recording (aka 'take' or 'trial'), which contains all the information necessary to recreate the entire capture from the file, including camera calibration, camera 2D data, reconstructed and labeled 3D data, data edits, solved joint angle data, tracking models (Skeletons, Rigid Bodies), and any additional device data (audio, force plate, etc). A Motive Take (TAK) file is a completely self-contained motion capture recording, and it can be opened by another copy of Motive on another system.
Note:
Take files are forward compatible, but not backwards compatible
BAK files:
If you have any old recordings from Motive 1.7 or below with the BAK file extension, please import these recordings into Motive 2.0 first and re-save them in the TAK file format in order to use them in Motive 3.0 or above.
Software configurations are saved in Motive profile (*.motive) files. The profile preserves all application-related configurations, lists of assets, and the loaded session folders. You can export and import profiles to easily maintain the same software configuration each time Motive is launched.
All of the currently configured software settings are saved to the C:\ProgramData\OptiTrack\MotiveProfile.motive file periodically throughout capture and when closing Motive. This file is the default application profile, and it is loaded back when Motive is launched again, allowing all configurations to persist between Motive sessions. If you wish to revert all settings to the factory default, use the Reset Application Settings button under the Edit tab of the main command bar.
Motive profiles can also be exported and imported from the File menu of the main command bar. Using the profiles, you can easily transfer and persist Motive configurations among different instances and different computers.
The following are saved in the application profile:
Application Settings
Live Pipeline Settings
Streaming Settings
Synchronization Settings
Export Settings
Rigid Body & Skeleton assets
Rigid Body & Skeleton settings
Labeling settings
Hotkey configurations
A calibration file is a standalone file that contains all of the information required to completely restore a calibrated camera volume, including the position and orientation of each camera, lens distortion parameters, and the camera settings. After a camera system is calibrated, the CAL file can be exported and later imported back into Motive when needed. It is therefore recommended to save the camera calibration file after each round of calibration.
Please note that reconstruction settings are also stored in the calibration file, just as they are stored in the MOTIVE profile. If a calibration file is imported after a profile file was loaded, it may overwrite the previously loaded reconstruction settings.
The following are saved in the calibration file:
Reconstruction settings
Camera settings
Position and orientation of the cameras
Location of the global origin
Lens distortion of each camera
Default System Calibration
The default system calibration is saved to the C:\ProgramData\OptiTrack\Motive\System Calibration.cal file, and it is loaded automatically at application startup to provide instant access to the 3D volume. This file is also updated each time the calibration is modified and when closing out of Motive.
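Because both of these default files are overwritten as you work, it can be useful to snapshot them before making major changes. Below is a minimal sketch in Python (not part of Motive) that backs up the default profile and system calibration files using the paths documented above; the backup folder is a hypothetical location you would choose yourself.

```python
import shutil
from datetime import datetime
from pathlib import Path

# Default file locations documented above; adjust if your install differs.
FILES = [
    Path(r"C:\ProgramData\OptiTrack\MotiveProfile.motive"),
    Path(r"C:\ProgramData\OptiTrack\Motive\System Calibration.cal"),
]
BACKUP_DIR = Path(r"C:\MocapBackups")  # hypothetical backup folder

BACKUP_DIR.mkdir(parents=True, exist_ok=True)
stamp = datetime.now().strftime("%Y%m%d_%H%M%S")

for src in FILES:
    if src.exists():
        # e.g. "MotiveProfile_20250101_120000.motive"
        dst = BACKUP_DIR / f"{src.stem}_{stamp}{src.suffix}"
        shutil.copy2(src, dst)  # copy2 also preserves file timestamps
        print(f"Backed up {src} -> {dst}")
    else:
        print(f"Not found, skipping: {src}")
```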
Use the dropdown menu at the top-left corner to switch into the Perspective View mode. You can also use the number 1 hotkey while on a viewport.
Used to view the reconstructed 3D representation of the capture, analyze marker positions, inspect the rays used in reconstruction, and more.
The context menu in the Perspective View allows you to access more options related to the markers and assets in 3D tracking data.
Use the dropdown menu at the top-left corner to switch into the Camera View mode. You can also use the number 2 hotkey while on a viewport.
Detected IR lights and/or reflections are also shown in this pane. Only the IR lights that satisfy the object filters get considered as markers.
When needed, the viewport can be split into 4 smaller views. This can be selected from the menu at the top-right corner of the viewport, or by using the Shift + 4 hotkey.
Hotkeys: "Shift + ~" is the default hotkey for toggling between Live and Edit modes in Motive.
Navigate Frames (Alt + Left-click + Drag)
Alt + left-click on the graph and drag the mouse left and right to navigate through the recorded frames. You can do the same with the mouse scroll as well.
Panning (Scroll-click + Drag)
Scroll-click and drag to pan the view vertically and horizontally throughout plotted graphs. Dragging the cursor left and right will pan the view along the horizontal axis for all of the graphs. When navigating vertically, scroll-click on a graph and drag up and down to pan vertically for the specific graph.
Zooming (Right-click + Drag)
Other Ways to Zoom:
Press "Shift + F" to zoom out to the entire frame range.
Zoom into a frame range by Alt + right-clicking on the graph and selecting the specific frame range to zoom into.
When a frame range is selected, press "F" to quickly zoom onto the selected range in the timeline.
Selecting Frame Range (Left-click + Drag)
The frame range selection is used when making post-processing edits on specific ranges of the recorded frames. Select a specific range by left-clicking and dragging the mouse left and right; the selected frame ranges will be highlighted in yellow. You can also select more than one frame range by shift-selecting multiple ranges.
Navigate Frames (Left-click)
Left-click and drag on the nav bar to scrub through the recorded frames. You can do the same with the mouse scroll as well.
Pan View Range
Scroll-click and drag to pan the view range.
Frame Range Zoom
Zoom into a frame range by re-sizing the scope range using the navigation bar handles. You can also easily do this by Alt + right-clicking on the graph and selecting a specific range to zoom into.
Working Range / Playback range
The working range (also called the playback range) is both the view range and the playback range of the corresponding Take in Edit mode. Recorded tracking data will be played back and shown on the graphs only within the working range. This range can also be used to output a specific frame range when exporting tracking data from Motive.
The working range can be set from different places:
In the navigation bar of the Graph View pane, you can drag the handles on the scrubber to set the working range.
You can also use the navigation controls on the Graph View pane to zoom in or zoom out on the frame ranges to set the working range.
Selection Range
The selection range is used to apply post-processing edits only to a specific frame range of a Take. The selected frame range will be highlighted in yellow on both the Graph View pane and the Timeline pane.
Gap indication
When playing back a recorded capture, the red colors on the navigation bar indicate the amount of occlusions from labeled markers. Brighter red means that there are more markers with labeling gaps.
If you wish to reset the default application setting, go to Reset Application Settings under the Edit tab.
Solver Settings
Camera Settings
The UI layout in Motive is customizable. All panes can be docked and undocked from the UI. Each pane can be positioned and organized by drag-and-drop using the on-screen docking indicators. Panes may float, dock, or stack. When stacked together, they form a tabbed window for quickly cycling through. Layouts in Motive can be saved and loaded, allowing a user to switch quickly between default and custom configurations suitable for different needs. Motive has preset layouts for Calibration, Creating a Skeleton, Capturing (Record), and Editing workflows. Custom layouts can be created, saved, and set as default from the Main Menu -> 'Layout' menu item. Quickly restore a particular layout from the Layout menu, the Layout Dropdown at the top right of the Main Menu, or via HotKeys.
Note: Layout configurations from Motive versions older than 2.0 cannot be loaded in latest versions of Motive. Please re-create and update the layouts for use.
A: 2D frame drops are logged under the Log pane, and they can also be seen in the Devices pane, indicated with a warning sign next to the corresponding camera. You may see a few frame drops when booting up the system or when switching between Live and Edit modes; however, this should occur only momentarily. If the system continues to drop 2D frames, there is a problem with receiving the camera data. In many cases, this occurs due to networking problems.
After all the cameras are placed at the correct locations, they need to be properly aimed in order to fully utilize their capture coverage. In general, all cameras need to be aimed at the target capture volume where markers will be tracked. While cameras are still attached to the mounting structure, carefully adjust the camera clamp so that the camera field of view (FOV) is directed at the capture region. Refer to the 2D camera views in the Camera Preview pane, and ensure that each camera view covers the desired capture region.
PrimeX 13 and PrimeX 13W cameras use M12 lenses and can be focused using custom focus tools that rotate the lens body. Focusing tools can be purchased from our webstore; they clip onto the camera lens and rotate it without opening the camera housing. It can be beneficial to lower the LED illumination to minimize reflections from the hand making the adjustment.
Recommended | Minimum |
---|
To install Motive, first download the Motive installer from our website. Follow the Downloads link under the Support page, where you will find the newest version of Motive as well as previous releases if needed. Both Motive:Body and Motive:Tracker share the same software installer.
After you have completed all of the steps above, Motive will be installed. If you want to use additional plugins, visit the plugins page.
For more information on the different types of Motive licenses, check the software comparison table on our website or in the table below.
License | Motive Edit | Motive Edit Unlimited | Motive Tracker | Motive Body | Motive Body Unlimited |
---|---|---|---|---|---|
Live Rigid Bodies | 0 | 0 | Unlimited | Unlimited | Unlimited |
Live Skeletons | 0 | 0 | 0 | Up to 3 | Unlimited |
Edit Rigid Bodies | Unlimited | Unlimited | Unlimited | Unlimited | Unlimited |
Edit Skeletons | Up to 3 | Unlimited | 0 | Up to 3 | Unlimited |
First of all, if you haven't already done so, make sure you have activated the software license. If the license was successfully activated, there should be a license file (DAT) placed under the license folder directory C:\ProgramData\OptiTrack\License.
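As a quick sanity check before launching Motive, you can verify that a license file is actually present in that folder. A minimal sketch in Python follows; the folder path is the one documented above, while the .dat extension filter is an assumption based on the (DAT) note.

```python
from pathlib import Path

# License folder documented above.
license_dir = Path(r"C:\ProgramData\OptiTrack\License")

# Assumes license files use a .dat extension (per the DAT note above).
licenses = sorted(license_dir.glob("*.dat")) if license_dir.exists() else []

if licenses:
    for lic in licenses:
        print(f"Found license file: {lic.name}")
else:
    print("No license file found; activate your license before running Motive.")
```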
If it is the first time using the camera system with the key, make sure the computer has access to the Internet so the camera can go through the initial activation with the security key.
Display Output | Status |
---|---|
Cycling Numbers | Camera is in the process of updating the firmware. The numbers will start at 0 and increase to 100 indicating that the firmware has completed 100% of the update. |
Constant Number | This is the number of the camera as assigned by Motive. Every time Motive is closed and reopened or a camera is removed from the system, the number will update accordingly. |
'E' | If an 'E' error code appears in the display this means that the camera has lost connection to the network. To troubleshoot this, start by unplugging the camera and plugging it back into the camera switch. Alternatively, you may also try restarting the entire switch to reset the entire network. |
Color | Status | Description |
---|---|---|
Green | Initialize Phase 1 | Camera is powered and boot loader is running. Preparing to run main firmware. |
Yellow | Initialize Phase 2 | Firmware is running and switch communication in progress. |
Blinking Green (Slow) | Initialize Phase 3 | Switch communication established and awaiting an IP address. |
Cyan | Firmware Loading | Host has initiated firmware upload process. |
Blinking Yellow | Initialize Phase 4 | Camera has fully initialized. In process of synchronizing with camera group or eSync. |
Blinking Green (Fast) | Running | Camera is fully operational and synchronized to the camera group. Ready for data capture. |
Blue | Hibernating | Camera is in a low power state and not sending data. Occurs after closing Motive but leaving the cameras connected to the switch. |
Alternating Red | Firmware Reset | On board flash memory is being reset. |
Alternating Yellow | Firmware Update | Firmware is being written to flash. Numeric display in front will show progress. On completion, the light turns green and camera reboots. |
Color | Info |
---|---|
Blue | Actively sending data and receiving commands when loaded into Motive. |
Green | Camera is sending data to be written to memory or disk. |
None | Camera is operating but Motive is in Edit Mode. |
Yellow | Camera is selected in Motive. |
Orange | Camera is in reference mode. Instead of capturing the marker data, the camera is recording reference video, MJPEG |
Blinking red on start up | |
Yellow on start up | The camera is attempting to establish a link with the PoE switch. |
Color | Status | Description |
---|---|---|
Green | Initialize Phase 1 | Camera is powered and boot loader is running. Preparing to run main firmware. |
Yellow | Initialize Phase 2 | Firmware is running and switch communication in progress. |
Blinking Green (Slow) | Initialize Phase 3 | Switch communication established and awaiting an IP address. |
Cyan | Firmware Loading | Host has initiated firmware upload process. |
Blinking Yellow | Initialize Phase 4 | Camera has fully initialized. In process of synchronizing with camera group or eSync2. |
Blinking Green (Fast) | Running | Camera is fully operational and synchronized to the camera group. Ready for data capture. |
Blue | Hibernating | Camera is in a low power state and not sending data. Occurs after closing Motive but leaving the cameras connected to the switch. |
Alternating Red | Firmware Reset | On board flash memory is being reset. |
Alternating Yellow | Firmware Update | Firmware is being written to flash. Numeric display in front will show progress. On completion, the light turns green and camera reboots. |
A Session is a file folder that allows the user to organize multiple related Takes (e.g. Monday, Tuesday, Wednesday, or StaticTrials, WalkingTrials, RunningTrials, etc.). Whether you are planning the day's shoot or incorporating a group of Takes mid-project, creating session folders can help manage complex sets of data. In the Data pane, you can import session folders that contain multiple Takes or create a new folder to start a new capture session. For the most efficient workflow, plan the mocap session before the capture and organize a list of captures (shots) that need to be completed. Type the Take names in a spreadsheet or a text file, then copy and paste the list into the Data pane; this will automatically create empty Takes (a shot list) with the corresponding names from the pasted list.
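For example, a shot list can be generated ahead of time with a short script and pasted into the Data pane. The sketch below is plain Python; the trial names and counts are hypothetical placeholders for your own session plan.

```python
# Generate a shot list to copy and paste into Motive's Data pane.
# Each pasted line becomes an empty Take with the corresponding name.
sessions = {
    "StaticTrial": 2,     # hypothetical trial types and counts
    "WalkingTrial": 5,
    "RunningTrial": 5,
}

lines = [
    f"{name}_{index:02d}"
    for name, count in sessions.items()
    for index in range(1, count + 1)
]

# Write the list to a text file; copy its lines and paste them into Motive.
with open("shot_list.txt", "w") as f:
    f.write("\n".join(lines))

print("\n".join(lines))
```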
Note that this calibration file is reliable only if the camera setup has remained unchanged since the calibration. Read more on the Calibration page.
In Motive, the main Viewport is fixed at the center of the UI and is used for monitoring the 2D or 3D capture data, both in live capture and during playback of recorded data. The viewport can be set to either the Perspective View or the Camera View. The Perspective View mode shows the reconstructed 3D data within the calibrated 3D space, and the Camera View mode shows 2D images from each camera in the setup. These modes can be selected from the drop-down menu at the top-left corner, and both of these views are essential for assessing and monitoring the tracking data.
Each camera’s view can be accessed from the Camera Preview pane. It displays the images that are being transmitted from each camera in one of the available image processing modes, including grayscale and object modes.
From the Camera Preview pane, you can mask certain pixel regions to exclude them from the reconstruction process.
When needed, an additional Viewer pane can be opened under the View tab or by clicking the corresponding icon on the main toolbar.
Most of the navigation controls in Motive are customizable, including both mouse and keyboard controls. The Hotkey Editor pane and the Mouse Control pane under the Edit tab allow you to customize mouse navigation and keyboard shortcuts for common operations.
Mouse controls in Motive can be customized from the Mouse Control pane to match your preference. Motive also includes a variety of common mouse control presets so that new users can easily start controlling Motive. Available preset control profiles include Motive, Blade, Maya, and Visual3D. The following table shows a few basic actions that are commonly used for navigating the viewports in Motive.
Function | Default Control |
---|---|
Rotate view | Right + Drag |
Pan view | Middle (wheel) click + drag |
Zoom in/out | Mouse Wheel |
Select in View | Left mouse click |
Toggle Selection in View | CTRL + left mouse click |
Using hotkeys can speed up workflows. Most of the default hotkeys are listed on the Hotkeys page. When needed, hotkeys can also be customized from the application settings panel, which can be accessed under the Edit tab. Various actions can be assigned a custom hotkey using the Hotkey Editor.
The Control Deck is always docked at the bottom of Motive, and it provides both recording and navigation controls over Motive's two primary operating modes: Live mode and Edit mode.
Switching to Live Mode in Motive using the control deck.
In Live mode, all cameras are active and the system is processing camera data. If the mocap system is already calibrated, Motive live-reconstructs the 2D camera data into labeled and unlabeled 3D trajectories (markers). The live tracking data can be streamed to other applications using the data streaming tools or the NatNet SDK. Also, in Live mode, the system is ready for recording, and the corresponding capture controls will be available in the Control Deck.
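To illustrate the streaming path, the sketch below subscribes to Rigid Body updates using the NatNetClient module that ships with the NatNet SDK Python samples. Method and callback names follow those samples but vary between SDK versions, so treat this as a sketch rather than a drop-in client; the IP addresses are placeholders for your own setup.

```python
# Requires NatNetClient.py from the NatNet SDK Python samples on the path.
from NatNetClient import NatNetClient

def receive_rigid_body_frame(body_id, position, rotation):
    # Called once per Rigid Body per frame; position is (x, y, z) and
    # rotation is a quaternion (qx, qy, qz, qw).
    print(f"Rigid Body {body_id}: pos={position} rot={rotation}")

client = NatNetClient()
client.set_client_address("127.0.0.1")  # this machine
client.set_server_address("127.0.0.1")  # machine running Motive
client.set_use_multicast(True)          # must match Motive's streaming settings
client.rigid_body_listener = receive_rigid_body_frame

if client.run():  # starts the background receive threads
    input("Streaming... press Enter to stop.\n")
    client.shutdown()
```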
In Edit mode, the cameras are not active, and Motive processes a loaded Take file (pre-recorded data). The playback controls will be available in the Control Deck, and a small timeline will appear at the top of the Control Deck for scrubbing through the recorded frames. In this mode, you can review the recorded 3D data from the TAK and make post-processing edits and/or manually assign marker labels to the recorded trajectories before exporting the tracking data. Also, when needed, you can switch to the recorded 2D data, view how the 3D data was reconstructed, and run the post-processing reconstruction pipeline to re-obtain a new set of 3D data.
The Graph View pane is used for plotting live or recorded channel data in Motive. For example, 3D coordinates of the reconstructed markers, 3D positions and orientations of Rigid Body assets, force plate data, analog data from data acquisition devices, and more can be plotted in this pane. You can switch between existing layouts or create a custom layout for plotting specific channel data.
Basic navigation controls are highlighted below. For more information, read through the Graph View pane page.
Right-click and drag on a graph to free-form zoom in and out on both the vertical and horizontal axes. If Autoscale Graph is enabled, the vertical axis range will be fixed according to the max and min values of the plotted data.
Start and end frames of a working range can also be set from the Control Deck when in Edit mode.
The Application Settings panel can be accessed under the Edit tab or by clicking the corresponding icon on the main toolbar.
This pane is used for configuring application-wide settings, which include startup configurations, display options for both the 2D and 3D viewports, settings for asset creation, and, most importantly, the live-pipeline parameters: the Solver settings and the 2D filter settings for the cameras. The Cameras tab includes the 2D filter settings, which determine which reflections get considered as marker reflections in the camera views, and the Solver settings determine which 3D markers get reconstructed in the scene from the group of marker reflections from all of the cameras. References for the available settings are documented in the Application Settings page.
Under the Live Pipeline settings, you can configure the real-time solver engine. These settings, including the trajectorizer settings, are among the most important settings in Motive. They determine how 3D coordinates are acquired from the captured 2D camera images and how they are used for tracking Rigid Bodies and Skeletons. Thus, understanding these settings is very important for optimizing the system for the best tracking results.
Under the Cameras tab, you can configure the 2D camera filter settings (circularity filter and size filter) as well as other display options for the cameras. The 2D camera filter is one of the key settings for optimizing the capture. For most applications the default settings work well, but it is still beneficial to understand some of the core settings for more efficient control over the camera system.
For more information, read through the Application Settings page.
Off | Powered & Awaiting Connection | When the camera is first plugged in, the LED ring light will be off until it receives commands from Motive and has successfully authenticated via the security key. If it is not successful in connecting to the network but is receiving power, it will remain off with a small flashing white dot in the bottom left corner. | No |
Slow Flashing Cyan, no IR | Idle | Powered and connected to network, but Motive is not running. Two dashes in the bottom left corner will be present in lieu of ID number. | No |
Cyan | Live | Actively sending data and receiving commands when loaded into Motive. | Yes |
White/Off | Masking | When a marker, or what a camera perceives as a marker, is visible to a camera when masking in the Calibration pane, the status light will turn white. When masks are applied and no erroneous marker data is seen, the LEDs turn off and the volume is ready to wand. | No |
Solid Green | Recording | Camera is sending data to be written to memory or disk. | Yes |
Variable Green | Sampling During Calibration | Camera starts out black, then green will appear on the ring light depending on where you have wanded relative to that camera. When the camera starts to take samples, there will be a white light that follows the wand movement rotating around the LED. This will fill in dark green and then light green when enough samples are taken. | No |
Flashing White | Calibration | During calibration, cameras that have collected sufficient data will turn green. Once enough cameras have collected enough samples, the leftover cameras will flash white, indicating they still need to collect more samples for a successful calibration. | No |
None | Playback | Camera is operating but Motive is in Edit Mode. | Yes |
Yellow | Selected | Camera is selected in Motive. | Yes |
Red | Reference | Camera is in reference mode. Instead of capturing the marker data, the camera is recording reference video, Greyscale and MJPEG | Yes |
Cycle Red | Firmware Reset | On board flash memory is being reset. | No |
Cycle Cyan | Firmware Update | For PrimeX cameras. Firmware is being written to flash. On completion, color turns off and camera reboots. | No |
Cycle Yellow | Firmware Update | For Prime cameras. Firmware is being written to flash. On completion, color turns off and camera reboots. | No |
The Continuous Calibration feature ensures your system always remains optimally calibrated, requiring no user intervention to maintain tracking quality. It uses sophisticated algorithms to evaluate the quality of the calibration and the triangulated marker positions. Whenever the tracking accuracy degrades, Motive automatically detects this, updates the calibration, and provides the most globally optimized tracking system.
Ease of use. This feature provides a much easier user experience because the capture volume will not have to be re-calibrated as often, which saves a lot of time. Simply enable this feature and Motive will maintain the calibration quality.
Optimal tracking quality. Always maintains the best tracking solution for live camera systems. This ensures that your captured sessions retain the highest quality calibration. If the system receives inadequate information from the environment, the calibration will not update, so your system never degrades based on sporadic or spurious data. A moderate increase in the number of real optical tracking markers in the volume and an increase in camera overlap improve the likelihood of a higher quality update.
Works with all camera types. Continuous calibration works with all OptiTrack camera models, including the V120 Tracking bars, the Flex series camera systems, and the Prime series camera systems as well as the Slim13E camera systems for active marker tracking.
For continuous calibration to work as expected, the following criteria must be met:
Markers Must Be Tracked. Continuous calibration looks at tracked reconstructions to assess and update the calibration. Therefore, at least some number of markers must be tracked within the volume.
Majority of Cameras Must See Markers. A majority of the cameras in a volume need to receive some tracking data within a portion of their field of view in order to initiate the calibration process. Because of this, traditional perimeter camera systems typically work best. Each camera should additionally see at least 4 markers for optimal calibration.
There are two different modes of continuous calibration: Continuous and Continuous + Bumped.
The Continuous mode is used to maintain the calibration quality, and this should be utilized in most cases. In this mode, Motive monitors how well the tracked rays converge onto tracked markers, and it updates the calibration so corresponding tracked rays converge more precisely. This mode is capable of correcting minor degradations that result from ambient influences, such as the thermal expansions on the camera mounting structure.
This mode requires markers to be seen by all of the cameras in the system in order for the calibration to be updated.
The Continuous + Bumped mode combines the continuous calibration refinement described above with the ability to resolve and repair cameras that have been bumped and are no longer contributing to 3D reconstruction. By utilizing this feature, the bumped camera will automatically resolve and be reintroduced into the calibration without requiring the user to perform a manual calibration. For just maintaining overall calibration quality, the Continuous mode should be used instead of the Continuous + Bumped mode.
Continuous calibration can be enabled or disabled in the Application Settings pane under the Reconstruction tab. Set the Continuous Calibration setting to Continuous or Continuous + Bumped to allow the feature to update the system calibration.
The status of continuous calibration can be monitored on the Status Log panel.
Under the Application Settings -> Reconstruction tab, set the continuous calibration to Continuous.
Once enabled, Motive continuously monitors the residual values in captured marker reconstructions. When the residual value increases, Motive will start sampling data for continuous calibration.
Make sure at least some number of markers are being tracked in the volume.
When a sufficient number of samples have been collected, Motive updates the calibration.
When successfully updated, the result will be reported in the Status Log pane.
Duo/Trio Tracking Bars: Duo/Trio tracking bars can utilize this feature to update their calibration and improve tracking quality.
When a camera is bumped and its orientation has shifted greatly, the affected camera will no longer properly contribute to the tracking. As a result, this camera will generate a large number of untracked rays.
Under the Application Settings -> Reconstruction tab, set the continuous calibration to Continuous + Bumped Camera.
Make sure there are one or more 3D reconstructed markers in motion within the field of view of the bumped camera.
When a sufficient number of samples have been collected, Motive will correct the bumped camera and update the system calibration.
Check the masking from the 2D Camera Previews. The masks may not be properly placed over the extraneous reflections due to the updated calibration. If so, simply re-mask the extraneous reflections. See: Masking
(Optional) If needed, export the updated calibration into a CAL file.
Do not use continuous calibration for updating calibration with cameras that have been moved significantly or repositioned entirely. While this feature may be able to handle such cases, this is not the intended use. When a camera is moved, you will need to manually calibrate the volume again for the best tracking quality.
Anchor markers can be set up in Motive to further improve continuous calibration. When properly configured, anchor markers improve continuous calibration updates, especially on systems that consist of multiple sets of cameras separated into different tracking areas by obstructions or walls, without camera view overlap. They also provide extra assurance that the global origin will not shift during each update, although the continuous calibration feature itself already checks for this.
Follow the below steps for setting up the active anchor marker in Motive:
Adding Anchor Markers in Motive
First, make sure the entire camera volume is fully calibrated and prepared for marker tracking.
Place any number of markers in the volume to assign them as the anchor markers.
Make sure these markers are securely fixed in place within the volume. It's important that the distances between these markers do not change throughout the continuous calibration updates.
In the 3D viewport, select the markers that are going to be assigned as anchors.
Right-click on the marker to bring up the context menu. Then go to Anchor Markers → Add Selected Markers.
Once markers are added as anchor markers, magenta spheres will appear around the markers indicating the anchors have been set.
Add more anchors as needed; again, it's important that these anchor markers do not move throughout the tracking. When the anchor markers need to be reset, for example if a marker was displaced, you can clear the anchor markers and reassign them.
The gizmo tools allow users to modify reconstructed 3D markers, Rigid Bodies, and Skeletons in the 3D Perspective View, for both real-time and post-processing of tracking data. This page provides instructions on how to utilize the gizmo tools for modifying asset definitions (Rigid Bodies and Skeletons).
Use the gizmo tools from the perspective view options to easily modify the position and orientation of Rigid Body pivot points. You can translate and rotate the Rigid Body pivot, assign the pivot to a specific marker, and/or assign the pivot to the mid-point among selected markers.
Select Tool (Hotkey: Q): Select tool for normal operations.
Translate Tool (Hotkey: W): Translate tool for moving the Rigid Body pivot point.
Rotate Tool (Hotkey: E): Rotate tool for reorienting the Rigid Body coordinate axis.
Scale Tool (Hotkey: R): Scale tool for resizing the Rigid Body pivot point.
Precise Position/Orientation: When translating or rotating the Rigid Body, you can CTRL + select a 3D reconstruction from the scene to precisely position the pivot point, or align a coordinate axis, directly on, or towards, the selected marker. Multiple reconstructions can also be selected, and their geometrical center (midpoint) will be used as the target reference.
You can utilize the gizmo tools to modify skeleton bone lengths and joint orientations, or to scale the spacing of the markers. Translating and rotating skeleton assets changes how each skeleton bone is positioned and oriented with respect to the tracked markers; thus, any change to the skeleton definition will affect how realistically the human movement is represented.
The scale tool modifies the size of selected skeleton segments.
The gizmo tools can also be used to edit the positions of reconstructed markers. In order to do this, you must be working with reconstructed 3D data in post-processing. In live-tracking, or in 2D mode doing live-reconstruction, marker positions are reconstructed frame-by-frame and cannot be modified. Edit Assets must be disabled to do this (Hotkey: T).
Translate
Using the translate tool, 3D positions of reconstructed markers can be modified. Simply click on the markers, turn on the translate tool (Hotkey: W), and move the markers.
Rotate
Using the rotate tool, 3D positions of a group of markers can be rotated about their center. Simply select a group of markers, turn on the rotate tool (Hotkey: E), and rotate them.
Scale
Using the scale tool, the 3D spacing of a group of markers can be scaled. Simply select a group of markers, turn on the scale tool (Hotkey: R), and scale their spacing.
Cameras can be modified using the gizmo tools if the Settings Window > General > Calibration > "Editable in 3D View" property is enabled. Without this property turned on, the gizmo tool will not activate when a camera is selected, to avoid accidentally changing a calibration. The process for using the gizmo tool to fix a misaligned camera is as follows:
Select the camera you wish to fix, then view from that camera (Hotkey: 3).
Select either the Translate or Rotate gizmo tool (Hotkey: W or E).
Use the red diamond visual to align the unlabeled rays roughly onto their associated markers.
Right-click, then choose "Correct Camera Position/Orientation". This will perform a calculation to place the camera more accurately.
Turn on Continuous Calibration if not already done. Continuous calibration should finish aligning the camera into the correct location.
This page covers the basic types of trackable assets in Motive. Assets are used both for tracking objects and for labeling 3D markers, and they are managed under the Assets pane, which can be opened by clicking the corresponding icon. Each type of asset is further explained in the related pages.
Once Motive is prepared, the next step is to place markers on the subject and create corresponding assets. There are three different types of assets in Motive:
Marker Set
Rigid Body
Skeleton
For each Take, involved assets are displayed in the Assets pane, and the related properties show up at the Properties pane when an asset is selected within Motive.
The Marker Set is a list of marker labels that are used to annotate reconstructed markers. Marker Sets should only be used in situations where it is not possible to define a Rigid Body or Skeleton. In this case, the user will manually label markers in post-processing. When doing so, having a defined set of labels (Marker Set) makes this process much easier. Marker Sets within a Take will be listed in the Labels pane, and each label can be assigned through the Labeling process.
Rigid body and Skeleton assets are the Tracking Models. Rigid bodies are created for tracking rigid objects, and Skeleton assets are created for tracking human motions. These assets automatically apply a set of predefined labels to reconstructed trajectories using Motive's tracking and labeling algorithms, and Motive uses the labeled markers to calculate the position and orientation of the Rigid Body or Skeleton Segment. Both Rigid Body and Skeleton tracking data can be sent to other pipelines (e.g. animations and biomechanics) for extended applications. If new Skeletons or Rigid Bodies are created during post-processing, the take will need to be reconstructed and auto-labeled in order to apply the changes to the 3D data.
Assets may be created both in Live mode (before capture) and in Edit mode (after capture, from a loaded TAK).
The Assets pane lists out all assets that are available in the current capture. You can easily copy these assets onto other recorded Take(s) or to the live capture by doing the following:
Copying Assets to a Recorded _Take_
In order to copy and paste assets onto another Take, right-click on the desired Take to bring up the context menu and choose Copy Assets to Takes. This will bring up a dialog window for selecting which assets to move.
Copying Assets to Multiple Recorded _Take(s)_
If you wish to copy assets to multiple Takes, select multiple takes from the Data pane until the desired takes are all highlighted. Repeat the steps you took above for copying a single Take by right-clicking on any of the selected Takes. This should copy the assets you selected to all the selected Takes in the Data pane.
Copying Assets from a Recorded _Take_ to the Live Capture
If you have a list of assets in a Take that you wish to import into the live capture, you can simply do this by right-clicking on the desired assets on the Assets pane, and selecting Copy Assets to Live.
For selecting multiple items, use Shift-click or Ctrl-click.
Assets can be exported into a Motive user profile (.MOTIVE) file if they need to be re-imported. The user profile is a text-readable file that can contain various configuration settings in Motive, including the asset definitions.
When asset definitions are exported to a MOTIVE user profile, the profile stores the marker arrangements calibrated in each asset, and they can be imported into different Takes without creating new assets in Motive. Note that these files specifically store the spatial relationship of each marker; therefore, only identical marker arrangements will be recognized and defined by the imported asset.
To export the assets, go to the File tab → Export Assets to export all of the assets in Live mode or in the current TAK file. You can also use the File tab → Export Profile to export other software settings along with the assets.
During the calibration process, a calibration square is used to define the global coordinate axes as well as the ground plane for the capture volume. Each calibration square has a different vertical offset value. When defining the ground plane, Motive will recognize the square and ask the user whether to change the value to the matching offset.
Square Type | Descriptions |
---|---|
For Motive 1.7 or higher, the right-handed coordinate system is used as the standard across internal and exported formats and data streams. As a result, Motive 1.7 and later interpret the L-Frame differently than previous releases.
OptiTrack motion capture systems can use both passive and active markers as indicators of 3D position and orientation. An appropriate marker setup is essential for both tracking quality and the reliability of captured data. All markers must be properly placed and must remain securely attached to surfaces throughout the capture. If any markers are taken off or moved, they will become unlabeled from the Marker Set and will stop contributing to the tracking of the attached object. In addition to marker placement, marker counts and specifications (size, circularity, and reflectivity) also influence the tracking quality. Passive (retroreflective) markers need well-maintained retroreflective surfaces in order to fully reflect the IR light back to the camera. Active (LED) markers must be properly configured and synchronized with the system.
OptiTrack cameras track any surfaces covered with retroreflective material, which is designed to reflect incoming light back to its source. IR light emitted from the camera is reflected by passive markers and detected by the camera’s sensor. The captured reflections are then used to calculate the 2D marker positions, which Motive uses to compute 3D positions through reconstruction. Depending on which markers are used (size, shape, etc.), you may want to adjust the camera filter parameters from the Live Pipeline settings in the Application Settings.
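To make the reconstruction idea concrete, the toy sketch below triangulates a single 3D point from two calibrated camera views using the standard direct linear transform. This is only an illustration of the principle, not Motive's actual solver; the camera matrices and the target point are made-up values.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Direct linear transform: recover a 3D point from two 2D observations.
    P1, P2 are 3x4 camera projection matrices; uv1, uv2 are pixel coordinates."""
    A = np.array([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    # The homogeneous 3D point is the smallest right singular vector of A.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])    # toy intrinsics
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])              # camera at origin
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0], [0]])])  # 0.5 m baseline

X_true = np.array([0.1, 1.0, 2.0, 1.0])      # made-up marker position
uv1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]   # its 2D centroid in camera 1
uv2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]   # its 2D centroid in camera 2

print(triangulate(P1, P2, uv1, uv2))  # ~ [0.1, 1.0, 2.0]
```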
The size of markers affects visibility. Larger markers stand out in the camera view and can be tracked at longer distances, but they are less suitable for tracking fine movements or small objects. In contrast, smaller markers are beneficial for precise tracking (e.g. facial tracking and microvolume tracking), but have difficulty being tracked at long distances or in restricted settings and are more likely to be occluded during capture. Choose appropriate marker sizes to optimize the tracking for different applications.
If you wish to track non-spherical retroreflective surfaces, lower the Circularity value in the 2D object filter, found under the Cameras tab of the application settings. This adjusts the circle filter threshold so that non-circular reflections can also be considered as markers. However, keep in mind that this also lowers the filtering threshold for extraneous reflections.
All markers need to have a well-maintained retroreflective surface. Every marker must satisfy the brightness Threshold defined in the camera properties to be recognized in Motive. Worn markers with damaged retroreflective surfaces will appear dimmer in the camera view, and the tracking may be limited.
Pixel Inspector: You can analyze the brightness of pixels in each camera view by using the pixel inspector, which can be enabled from the Application Settings.
Please contact our Sales team to decide which markers will suit your needs.
OptiTrack cameras can track any surface covered with retro-reflective material. For best results, markers should be completely spherical with a smooth and clean surface. Hemispherical or flat markers (e.g. retro-reflective tape on a flat surface) can be tracked effectively from straight on, but when viewed from an angle, they will produce a less accurate centroid calculation. Hence, non-spherical markers will have a less trackable range of motion when compared to tracking fully spherical markers.
OptiTrack's active solution provides advanced tracking of IR LED markers to accomplish the best tracking results. This allows each marker to be labeled individually. Please refer to the Active Marker Tracking page for more information.
Active (LED) markers can also be tracked with OptiTrack cameras when properly configured. We recommend using OptiTrack’s Ultra Wide Angle 850nm LEDs for active LED tracking applications. If third-party LEDs are used, their illumination wavelength should be at 850nm for best results. Otherwise, light from the LED will be filtered by the band-pass filter.
If your application requires tracking LEDs outside of the 850nm wavelength, the OptiTrack camera should not be equipped with the 850nm band-pass filter, as it will cut off any illumination above or below the 850nm wavelength. An alternative solution is to use the 700nm short-pass filter (for passing illumination in the visible spectrum) and the 800nm long-pass filter (for passing illumination in the IR spectrum). If the camera is not equipped with the filter, the Filter Switcher add-on is available for purchase at our webstore. There are also other important considerations when incorporating active markers in Motive:
Place a spherical diffuser around each LED marker to increase the illumination angle. This will improve the tracking since bare LED bulbs have limited illumination angles due to their narrow beamwidth. Even with wide-angle LEDs, the lighting coverage of bare LED bulbs will be insufficient for the cameras to track the markers at an angle.
If an LED-based marker system will be strobed (to increase range, offset groups of LEDs, etc.), it is important to synchronize their strobes with the camera system. If you require a LED synchronization solution, please contact one of our Sales Engineers to learn more about OptiTrack’s RF-based LED synchronizer.
Many applications that require active LEDs for tracking (e.g. very large setups with long distances from a camera to a marker) will also require active LEDs during calibration to ensure sufficient overlap in-camera samples during the wanding process. We recommend using OptiTrack’s Wireless Active LED Calibration Wand for best results in these types of applications. Please contact one of our Sales Engineers to order this calibration accessory.
Proper marker placement is vital for the quality of motion capture data because each marker on a tracked subject is used as an indicator of both position and orientation. When an asset (a Rigid Body or Skeleton) is created in Motive, the unique spatial relationship of its markers is calibrated and recorded. The recorded information is then used to recognize the markers of the corresponding asset during the auto-labeling process. For best tracking results, when multiple subjects with a similar shape are involved in the capture, it is necessary to offset their marker placements to introduce asymmetry and avoid congruency.
Read more about marker placements from the Rigid Body Tracking page and the Skeleton Tracking page.
Asymmetry
Asymmetry is the key to avoiding congruency when tracking multiple Marker Sets. When there is more than one similar marker arrangement in the volume, marker labels may be confused. Thus, it is beneficial to place segment markers (joint markers must always be placed on anatomical landmarks) in asymmetrical positions on similar Rigid Bodies and Skeletal segments. This provides a clear distinction between two similar arrangements. Furthermore, avoid placing markers in a symmetrical shape within a segment as well. For example, a perfect square marker arrangement has an ambiguous orientation, and frequent mislabels may occur throughout the capture. Instead, follow the rule of thumb of placing the less critical markers in asymmetrical arrangements.
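To see what congruency means in practice, the sketch below compares two marker arrangements by their sorted inter-marker distances, which are invariant to rotation and translation; if the two distance sets are nearly identical, the arrangements are similar enough to risk label swaps. The marker coordinates and tolerance are made-up values in meters.

```python
import numpy as np

def distance_signature(markers):
    """Sorted pairwise distances of a marker arrangement (rotation- and
    translation-invariant, which is what makes congruent sets confusable)."""
    markers = np.asarray(markers)
    i, j = np.triu_indices(len(markers), k=1)
    return np.sort(np.linalg.norm(markers[i] - markers[j], axis=1))

def risk_of_confusion(a, b, tol=0.005):  # tol ~ marker placement accuracy (m)
    sig_a, sig_b = distance_signature(a), distance_signature(b)
    return len(sig_a) == len(sig_b) and np.allclose(sig_a, sig_b, atol=tol)

# Two hypothetical 4-marker Rigid Bodies with identical marker arrangements:
body1 = [(0, 0, 0), (0.10, 0, 0), (0, 0.08, 0), (0.05, 0.05, 0.06)]
body2 = [(1.0, 0, 0.5), (1.10, 0, 0.5), (1.0, 0.08, 0.5), (1.05, 0.05, 0.56)]

print(risk_of_confusion(body1, body2))  # True: offset a marker to break this
```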
Prepare the markers and attach them to the subject, whether a Rigid Body or a person. Minimize extraneous reflections by covering shiny surfaces with non-reflective tape. Then, securely attach the markers to the subject using adhesives suitable for the surface. Various types of adhesives and marker bases are available on our webstore for attaching the markers: acrylic, rubber, skin adhesive, and Velcro. Multiple types of marker bases are also available: carbon-fiber-filled bases, Velcro bases, and snap-on plastic bases.
Like many other measurement systems, optical motion capture systems require calibration. During camera calibration, the system computes the position and orientation of each camera and the amount of distortion in captured images, and these are used to construct the 3D capture volume in Motive. This is done by observing 2D images from multiple synchronized cameras and associating the positions of known calibration markers from each camera through triangulation.
Please note that if there is any change in the camera setup over the course of a capture, the system will need to be recalibrated to accommodate the changes. Moreover, even if the setup is not altered, calibration accuracy may naturally deteriorate over time due to ambient factors, such as fluctuations in temperature and other environmental conditions. Thus, for accurate results, it is recommended to periodically recalibrate the system.
Duo/Trio Tracking Bars: The Duo/Trio tracking bars are self-contained and pre-calibrated prior to shipment; therefore, user calibration is not required.
Prepare and optimize the capture volume for setting up a motion capture system.
Apply masks to ignore existing reflections in the camera view. Here, also make sure the calibration tools are hidden from the camera views.
Collect calibration samples through the wanding process.
Review the wanding result and apply calibration.
Set the ground plane to complete the system calibration.
By default, Motive starts up in the calibration layout, which contains the panes necessary for the calibration process. This layout can also be accessed by selecting the calibration layout from the top-right corner of the UI, or by using the Ctrl+1 hotkey.
System settings used for calibration should be kept unchanged; if camera settings are altered after the calibration, the system may need to be recalibrated. To avoid such inconvenience, it is important to optimize both the hardware and software setup before calibrating. First, cameras need to be appropriately placed and configured to fully cover the capture volume. Second, each camera must be mounted securely so that it remains stationary during capture. Lastly, Motive's camera settings used for calibration should remain unchanged throughout the capture; recalibration will be required after any significant modification to the settings that influence data acquisition, such as camera settings, gain settings, and Filter Switcher settings.
All extraneous reflections and unnecessary markers should ideally be removed from the capture volume before calibration. In fact, the system will refuse to calibrate if too many reflections other than the calibration wand are present in the camera views. However, in certain situations, unwanted reflections or ambient interference cannot be removed from the setup. In this case, these irrelevant reflections can be ignored using the Masking Tool. This tool applies red masks over the extraneous reflections seen in the 2D camera view, and all of the pixels in the masked regions are entirely filtered out. This is very useful for blocking unwanted reflections that cannot be removed from the setup. Use the masking tool to remove any extraneous reflections before proceeding to wanding.
Be careful when using the masking features, because masked pixels are completely filtered from the 2D data. In other words, data in masked regions will not be collected for computing the 3D data, and excessive use of masking may result in data loss or frequent marker occlusions. Therefore, all removable reflective objects should be taken out or covered before using the masking tool. After all reflections are removed or masked from the view, proceed to the wanding process.
The wanding process is the core pipeline that samples calibration data into Motive. A calibration wand is waved in front of the cameras repeatedly, allowing all cameras to see the markers. Through this process, each camera captures sample frames in order to compute their respective position and orientation in the 3D space. There are a number of calibration wands suited for different capture applications.
Active Wanding:
Masking camera views applies only to calibration wands with passive markers. Active calibration wands can calibrate the capture volume while the LEDs of all the cameras are turned off. If the capture volume contains a large amount of reflective material that cannot be moved, this method is highly recommended.
Under the OptiWands section, specify the wand that you will be using to calibrate the volume. It is very important to input the matching wand size here; if an incorrect dimension is given to Motive, the calibrated 3D volume will be scaled incorrectly (for example, entering 250mm for a 500mm wand would produce a volume scaled to half the real-world size). If you are using a CW-500 wand with markers in configuration A, use the 500mm setting. If you are using a CW-250 wand, or a CW-500 wand with configuration B, use the 250mm setting.
Set the Calibration Type. If you are calibrating a new capture volume, choose Full Calibration.
Double check the calibration setting. Once confirmed, press Start Wanding to initiate the wanding process.
Start wanding. Bring your calibration wand into the capture volume and wave it gently across the entire capture volume. Draw figure-eights with the wand to collect samples at varying orientations, and cover as much space as possible for sufficient sampling. If you wish to start calibrating inside the volume, cover one of the markers and expose it where you wish to start wanding. When at least two cameras detect all three markers while no other reflections are present in the volume, the wand will be recognized and Motive will start collecting samples. Wanding trails will be shown in colors in the 2D view. A table displaying the status of the wanding process will show up in the Calibration pane so you can monitor the progress. For best results, wand evenly and comprehensively throughout the volume, covering both low and high elevations.
After wanding throughout all areas of the volume, consult each 2D view in the Camera Preview pane to evaluate individual camera coverage. Each camera view should be thoroughly covered with wand samples; if there are any large gaps, focus wanding on those areas to increase coverage. When sufficient calibration samples have been collected by each camera, press the Calculate button under the Calibration section, and Motive will start calculating the calibration for the capture volume. Generally, 2,000 - 5,000 samples are enough.
Wanding more than the recommended amount will not necessarily improve the accuracy of your calibration. There is a diminishing return with wanding samples, and excessively large sample counts can actually cause calibrations to result in poor data.
Wanding Tips
Avoid waving the wand too fast. This may introduce bad samples.
Avoid wearing reflective clothing or accessories while wanding. This can introduce extraneous samples which can negatively affect the calibration result.
Try not to collect samples beyond 10,000. Extra samples could negatively affect the calibration.
Try to collect wanding samples covering different areas of each camera view. The status indicator on Prime cameras can be used to monitor the sample coverage on individual cameras.
Although it is beneficial to collect samples all over the volume, it is sometimes useful to collect more samples in the vicinity of the target regions where more tracking is needed. By doing so, calibration results will have a better accuracy in the specific region.
Marker Labeling Mode
When performing calibration wanding, please make sure the Marker Labeling Mode is set to the default Passive Markers Only setting. This setting can be found under Application Settings → Live-Reconstruction tab → Marker Labeling Mode. There are known problems with wanding in the active marker labeling modes. This applies to both passive marker calibration wands and IR LED wands.
For Prime series cameras, the LED indicator ring displays the status of the wanding process. As soon as the wanding is initiated, the LED ring will turn dark, and then green lights will fill up around the ring as the camera collects the sample data from the calibration wand.
Eventually, the ring will be filled with green light when a sufficient number of samples has been collected. A single LED will glow blue if the calibration wand is detected by the camera, and the clock position of the blue light will indicate the respective wand location in the Camera Preview pane.
For more information, please visit our Camera Status Indicators documentation page.
After sufficient marker samples have been collected, press Calculate to calibrate using collected samples. The time needed for the calibration calculation varies depending on the number of cameras included in the setup as well as the amount of collected samples.
Immediately after clicking calculate, the samples window will turn into the solver window. It will display the solver stage at the top, followed by the overall result rating and the overall quality selection. The overall result rating is the lowest rating of any one camera in the volume. The overall quality selection shows the current solver quality.
Calibration details can be reviewed for recorded Takes. Select a Take in the Data pane, and related calibration results will be displayed under the Properties pane. This information is available only for Takes recorded in Motive 1.10 and above.
After the calculation completes, a Calibration Result Report will pop up with detailed information about the calibration. The Calibration Result is directly related to the mean error and will update accordingly; the calibration result tiers are (in order from worst to best): Poor, Fair, Good, Great, Excellent, and Exceptional. If the results are acceptable, press Apply to use the result. If not, press Cancel and repeat the wanding process. It is recommended to save your calibration file for later use.
After the calculation has completed, you will see cameras displayed in the 3D view pane of Motive. However, the constructed capture volume in Motive will not be aligned with the coordinate plane yet. This is because the ground plane is not set. If calibration results are acceptable, proceed to setting the ground plane.
The final step of the calibration process is setting the ground plane and the origin. This is accomplished by placing the calibration square in your volume and telling Motive where the calibration square is. Place the calibration square inside the volume where you want the origin to be located and the ground plane to be leveled to. The position and orientation of the calibration square will be referenced for setting the coordinate system in Motive. Align the calibration square so that it references the desired axis orientation.
The longer leg of the calibration square indicates the positive z-axis, and the shorter leg indicates the direction of the positive x-axis. Accordingly, the positive y-axis will automatically be directed upward in a right-handed coordinate system. The next step is to use the level indicator on the calibration square to ensure the orientation is horizontal to the ground. If any adjustment is needed, rotate the knob beneath the markers to adjust the balance of the calibration square.
If you wish to adjust the position and orientation of the global origin after the capture has been taken, you can apply capture volume translation and rotation from the Calibration pane. After the modification has been applied, a new set of 3D data must be reconstructed from the recorded 2D data.
After confirming that the calibration square is properly placed, open the Ground Plane tab in the Calibration pane. Select the three calibration square markers in the 3D Perspective View. When the markers are selected, press Set Ground Plane to reorient the global coordinate axes with respect to the calibration square. After setting the ground plane, Motive will ask to save the calibration data as a CAL file.
Duo/Trio Tracking Bars: The global origin of the tracking bars can be adjusted by using a calibration square and the Coordinate System Tools in Motive.
The Vertical Offset setting in the Calibration pane compensates for the offset distance between the center of the markers on the calibration square and the actual ground. Defining this value takes the offset distance into account and sets the global origin slightly below the markers. Accordingly, this value should correspond to the actual distance between the center of the marker and the lowest tip at the vertex of the calibration square. When a calibration square is detected, Motive will recognize the type of square used and automatically set the offset value. This setting can also be used when you want to place the ground plane at a specific elevation: a positive offset value places the plane below the markers, and a negative value places the plane above the markers.
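As a worked example of the offset arithmetic (the marker height and offset values here are hypothetical, not the specification of any particular calibration square):

```python
# Height of the calibration square's marker centers above the floor, in mm.
marker_center_height = 45.0  # hypothetical measured value

# Vertical Offset entered in the Calibration pane, in mm.
vertical_offset = 45.0       # positive: ground plane is set below the markers

ground_plane_elevation = marker_center_height - vertical_offset
print(ground_plane_elevation)  # 0.0 -> the origin lands on the actual floor

# To place the ground plane 100 mm above the floor instead, solve for the
# offset: vertical_offset = 45.0 - 100.0 = -55.0 (negative: above the markers)
```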
The Ground Plane Refinement feature is used to improve the leveling of the coordinate plane. To refine the ground plane, place several markers with a known radius on the ground and set the vertical offset value to the corresponding radius. Then select these markers in Motive and press Refine Ground Plane; Motive will refine the leveling of the plane using the position data from each marker. This feature is especially useful when establishing a ground plane for a large volume, because the surface may not be perfectly uniform throughout the plane.
Calibration files can be used to preserve calibration results. The information from the calibration is exported or imported via the CAL file format. Calibration files reduce the effort of calibrating the system every time you open Motive, and they can also be stored within the project so that they can be loaded whenever the project is accessed. By default, Motive loads the last calibration file that was created; this can be changed via the Application Settings.
Note that whenever there is a change to the system setup, these calibration files will no longer be relevant and the system will need to be recalibrated.
The continuous calibration feature continuously monitors and refines the camera calibration to its best quality. When enabled, minor distortions to the camera system setup can be adjusted automatically without wanding the volume again. In other words, you can calibrate a camera system only once and you will no longer have to worry about external distortions such as vibrations, thermal expansion on camera mounts, or small displacements on the cameras. For detailed information, read through the Continuous Calibration page.
The Continuous Calibration can be enabled under the Reconstruction tab in the Application Settings.
Disabled
Continuous Calibration is disabled.
Continuous
In this mode, Continuous Calibration is enabled and Motive continuously optimizes the camera calibration. This mode accommodates only minor changes, such as vibrations, thermal expansion, or small drifts in the positions and orientations of the cameras.
Continuous + Bumped Camera
This mode also allows Motive to continuously monitor the system calibration. Unlike the standard Continuous Calibration, it can adjust the system calibration even after drastic changes in the positions and orientations of cameras. If a camera has been displaced significantly, use the bumped camera mode and Motive will accommodate the change and reposition the bumped camera. For simply maintaining calibration quality, the Continuous mode is sufficient.
When capturing throughout a whole day, temperature fluctuations may degrade calibration quality, and you will want to recalibrate the capture volume at different times of the day. However, repeating the entire calibration process can be tedious and time-consuming, especially with a high-camera-count setup. In this case, instead of repeating the entire calibration process, you can record Takes containing the wand waves and the calibration square and use those Takes to re-calibrate the volume in post-processing. This offline calibration saves calibration time on the capture day because the recorded wanding Take can be processed afterward. Users can also inspect the collected capture data and re-calibrate a recorded Take only when signs of degraded calibration quality are seen in the captures.
Offline Calibration Steps
1) Capture wanding/ground plane Takes. At different times of the day, record wanding Takes that closely resemble the calibration wanding process. Also record corresponding ground plane Takes with the calibration square set in the volume for defining the ground plane.
2) Load the recorded wanding Take. If you wish to re-calibrate the cameras for a captured Take during playback, load the wanding Take that was recorded around the same time.
3) Motive: Calibration pane. In the Edit mode, press Start Wanding. The wanding samples from recorded 2D data will be loaded.
4) Motive: Calibration pane. Press Calculate, and wait until the calculation process is complete.
5) Motive: Calibration pane. Apply Result and export the calibration file. File tab → Export Camera Calibration.
6) Load the recorded Ground Plane Take.
7) Open the saved calibration file. With the Ground Plane Take loaded in Motive, open the exported calibration file, and the saved camera calibration will be applied to the ground plane take.
8) Motive: Perspective View. From 2D data of the Ground Plane Take, select the calibration square markers.
9) Motive: Calibration pane: Ground Plane. Set the Ground plane.
10) Motive: Perspective View. Switch back to the Live mode. The recorded Take is now re-calibrated.
Whenever a system is calibrated, a calibration wanding file is saved, which can be used to reproduce the calibration file through the offline calibration process.
The partial calibration feature allows you to update the calibration for some selection of cameras in a system. The way this feature works is by updating the position of the selected cameras relative to the already calibrated cameras. This means that you only need to wand in front of the selected cameras as long as there is at least one unselected camera that can also see the wand samples.
This feature is especially helpful for high-camera-count systems, where you only need to adjust a few cameras instead of re-calibrating the whole system. A common way to get into this situation is by bumping a single camera; partial calibration allows you to quickly re-calibrate the single bumped camera that is now out of place. This feature is also useful when you need to calibrate without changing the location of the ground plane: as long as there is at least one unselected camera, Motive can use it to retain the position of the ground plane relative to the cameras.
Partial Calibration Steps
From the Devices pane, select the camera that has been moved or added.
Open the Calibration Pane.
Set Calibration Type: In most cases you will want to set this to Full, but if the camera only moved slightly, Refine works as well.
Specify the wand type.
From the Calibration Pane, click Start Wanding. A pop-up dialogue will appear indicating that only selected cameras are being calibrated.
Choose Calibrate Selected Cameras from the dialogue window.
Wave the calibration wand mainly within the view of the selected cameras.
Click Calculate. At this point, only the selected cameras will have their calibration updated.
Notes:
This feature relies on the unselected cameras being in a good calibration state. If the unselected cameras are out of calibration, using this feature will produce a bad calibration.
Partial calibration does not update the calibration of unselected cameras. However, the calibration report that Motive provides does include all cameras that received samples, selected or unselected.
The partial calibration process can also be used for adding new cameras onto an existing calibration. Use the Full calibration type in this case.
The OptiTrack motion capture system is designed to track retro-reflective markers. However, active LED markers can also be tracked with appropriate customization. If you wish to use Active LED markers for capture, the system will ideally need to be calibrated using an active LED wand. Please contact us for more details regarding Active LED tracking.
Once the capture volume is calibrated and all markers are placed, you are ready to capture Takes. This page covers key concepts and tips that are important for the recording pipeline. For real-time tracking applications, you can skip this page and read through the data streaming page.
There are two different modes in Motive: the Live mode and the Edit mode. You can toggle between the two modes from the Control Deck or by using the (Shift + ~) hotkey.
Live Mode
The Live mode is mainly used when recording new Takes or when streaming a live capture. In this mode, all of the cameras are continuously capturing 2D images and reconstructing the detected reflections into 3D data in real-time.
Edit Mode
The Edit mode is used for playback of captured Take files. In this mode, you can play back, or stream, recorded data. Captured Takes can also be post-processed by fixing mislabeling errors or interpolating occluded trajectories if needed.
Tip: For Skeleton tracking, always start and end the capture with a T-pose or A-pose, so that the Skeleton assets can be redefined from the recorded data as well.
Tip: Efficient ways of managing Takes
Always start by creating session folders for organizing related Takes. (e.g. name of the tracked subject).
Plan ahead and create a list of captures in a text file or a spreadsheet, and you can create empty takes by copying and pasting the list into the Data Management pane (e.g. walk, jog, run, jump).
Once pasted, empty Takes with the corresponding names will be imported.
Select one of the empty takes and start recording. The capture will be saved with the corresponding name.
When captured successfully, select another empty Take in the list and capture the next one.
2D data: The recorded Take file includes just the 2D object images from each camera.
3D data: The recorded Take file also includes reconstructed 3D marker data in addition to 2D data.
Marker data, labeled or unlabeled, represents the 3D positions of markers. These markers do not represent Rigid Body or Skeleton solver calculations; they are the actual marker positions reconstructed from the camera data. They are displayed as solid spheres in the viewport. By default, unlabeled markers are colored white, and labeled markers take colors that reflect the color setting of the Rigid Body or the corresponding bone.
Labeled Marker Colors:
Colors of the Rigid Body labeled markers can be changed from the properties of the corresponding asset.
Colors of the markers can be changed from the Constraints XML file if needed.
Rigid Body markers, or bone markers, are expected marker positions. They appear as transparent spheres within a Rigid Body or a Skeleton, and they reflect the position where the Rigid Body or Skeleton solver expects to find a corresponding reconstructed marker. Calculating these positions assumes that each marker is fixed to a rigid segment that does not deform over the course of capture. When the Rigid Body or Skeleton solver is correctly tracking the reconstructed markers, the marker reconstructions and the expected marker positions will have similar position values and will closely align in the viewport.
When creating Rigid Bodies, their associated markers will appear as a network of lines between the markers, and Skeleton expected marker positions will be located next to body segments, or bones (see Figure 2). If the marker placement is distorted during capture, the actual marker position will deviate from the expected position, and eventually the marker may become unlabeled. Figure 1 shows how actual and expected marker positions can align or deviate from each other. Due to the nature of marker-based mocap systems, labeling errors may occur during capture, so understanding each marker type in Motive is very important for correct interpretation of the data.
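One practical consequence: the distance between a reconstructed marker and its expected position is a useful health metric. A minimal sketch, assuming positions are taken from exported tracking data (not a Motive API):

```python
import numpy as np

def marker_residuals(expected_positions, reconstructed_positions):
    """Per-marker distance between expected (solver) and actual (reconstructed)
    positions; consistently large values suggest a displaced marker."""
    expected = np.asarray(expected_positions, dtype=float)
    actual = np.asarray(reconstructed_positions, dtype=float)
    return np.linalg.norm(expected - actual, axis=1)
```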
This page provides information on aligning a Rigid Body pivot point with a 3D model that replicates a real object.
When using streamed Rigid Body data to animate a 3D model that replicates a real-life object, alignment of the pivot points is necessary. In other words, the location of the Rigid Body pivot must coincide with the location of the pivot point in the corresponding 3D model. If they are not aligned accurately, the animated motion will not be at a 1:1 ratio compared to the actual motion. This alignment is commonly needed for real-time VR applications where real-life objects are 3D modeled and animated in the scene. The suggested approaches for aligning these pivot points are discussed on this page.
There are two methods for doing this: using a measurement probe to sample 3D reference points, or simply aligning against a grayscale reference view. The first method, creating and using a measurement probe, is the most accurate and is recommended.
Step 1. Create a Rigid Body of the target object
First of all, create a Rigid Body from the markers on the target object. By default, the pivot point of the Rigid Body will be positioned at the geometric center of the marker placement. Then place the object somewhere stable where it will remain stationary.
Step 2. Create a measurement probe.
Step 3. Collect data points to outline the silhouette
Step 4. Attach 3D model
From the sampled 3D points, you can also export the markers created with the probe to Maya or other content-creation packages to generate models that are guaranteed to scale correctly.
Step 5. Translate the pivot point
Step 6. Copy transformation values
Step 7. Zero all transformation values in the Attached Geometry section
Once the Rigid Body pivot point has been moved using the Builder pane, zero all of the transformation configurations under the Attached Geometry property for the Rigid Body.
This page explains different types of captured data in Motive. Understanding these types is essential in order to fully utilize the data-processing pipelines in Motive.
There are three different types of data: 2D data, 3D data, and Solved data. Each type is covered in detail throughout this page; in short, 2D data is the captured camera frame data, 3D data is the reconstructed 3-dimensional marker data, and Solved data is the calculated positions and orientations of Rigid Bodies and Skeleton segments.
Motive saves tracking data into a Take file (TAK extension); when a capture is first recorded, all of the 2D data, real-time reconstructed 3D data, and Solved data are saved into the Take file. Recorded 3D data can be post-processed further in the Edit mode, and when needed, a new set of 3D data can be re-obtained from saved 2D data by performing the reconstruction pipeline. From the 3D data, Solved data can be derived.
Available data types are listed in the Data pane. When you open a Take in the Edit mode, the loaded data type is highlighted at the top-left corner of the 3D viewport. If available, 3D data is loaded first by default, and the 2D data can be accessed by entering the 2D mode from the Data pane.
2D data is the foundation of motion capture data. It mainly includes the 2D frames captured by each camera in a system.
Recorded 2D data can be reconstructed and auto-labeled to derive the 3D data.
3D tracking data is not computed yet. The tracking data can be exported only after reconstructing the 3D data.
During playback of recorded 2D data, 3D data will be live-reconstructed and displayed in the 3D viewport.
Reconstructed 3D marker positions.
Marker labels can be assigned.
Assets are modeled and the tracking information is available.
Record Solved Data
When a Rigid Body or Skeleton exists in a Take, Solved data can be recorded. From the Assets pane, right-click one or more assets and select Solve from the context menu to calculate the solved data. To delete it, simply click Remove Solve.
Deleting 3D data for a single _Take_
When no frame range is selected, this deletes 3D data from the entire Take. When a frame range is selected from the Timeline Editor, this deletes 3D data in the selected range only.
Deleting 3D data for multiple _Takes_
Even when a frame range is selected, this deletes 3D data from the entire frame range of every selected Take.
Deleting labels for a single _Take_
When no frame range is selected, this unlabels all markers throughout the Take. When a frame range is selected from the Timeline Editor, this unlabels markers in the selected range only.
Deleting labels for multiple _Takes_
Even when a frame range is selected from the timeline, this unlabels all markers from all frame ranges of the selected Takes.
In Motive, Rigid Body assets are used for tracking rigid, unmalleable objects. A set of markers is securely attached to the tracked object, and the marker placement information is used to identify the object and report 6 Degree of Freedom (6DoF) data. It is therefore important that the distances between the placed markers stay the same throughout the range of motion. Either passive retro-reflective markers or active LED markers can be used to define and track a Rigid Body. This page details instructions on how to create Rigid Bodies in Motive and other useful features associated with these assets.
A Rigid Body in Motive is a collection of three or more markers on an object that are interconnected to each other with the assumption that the tracked object is unmalleable. More specifically, it assumes that the spatial relationship among the attached markers remains unchanged and that the marker-to-marker distances do not deviate beyond the allowable tolerance defined under the corresponding Rigid Body properties. Otherwise, the involved markers may become unlabeled. Cover any reflective surfaces on the Rigid Body with non-reflective materials, and attach the markers on the exterior of the Rigid Body where cameras can easily capture them.
Tip: If you wish to get more accurate 3D orientation data (pitch, roll, and yaw) for a Rigid Body, it is beneficial to spread the markers as far apart as possible within the same Rigid Body. With the markers placed this way, even a slight deviation in orientation is reflected in measurable changes in the marker positions.
In a 3D space, a minimum of three coordinates is required to define a plane; likewise, at least three markers are required to define a Rigid Body in Motive. Whenever possible, it is best to use four or more markers to create a Rigid Body. Additional markers provide more 3D coordinates for computing the position and orientation of the Rigid Body, making overall tracking more stable and less vulnerable to marker occlusions. When any of the markers are occluded, Motive can reference the other visible markers to solve for the missing data and compute the position and orientation of the Rigid Body.
However, placing too many markers on one Rigid Body is not recommended. When too many markers are placed in close vicinity, they may overlap in the camera views, and Motive may not be able to resolve the individual reflections. This increases the likelihood of label swaps during capture. Securely place just enough markers (usually fewer than 10) to cover the main frame of the Rigid Body.
Tip: The recommended number of markers per Rigid Body is 4 ~ 12. A Rigid Body cannot be created from more than 20 markers in Motive.
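A Rigid Body's position and orientation are computed from its labeled marker positions, as described above. The classic Kabsch algorithm sketches the underlying geometry (illustrative only; Motive's solver is more sophisticated):

```python
import numpy as np

def rigid_transform(reference_markers, observed_markers):
    """Best-fit rotation R and translation t mapping reference -> observed
    (Kabsch algorithm). Both inputs are N x 3 arrays in matching order."""
    ref = np.asarray(reference_markers, dtype=float)
    obs = np.asarray(observed_markers, dtype=float)
    ref_c, obs_c = ref.mean(axis=0), obs.mean(axis=0)
    h = (ref - ref_c).T @ (obs - obs_c)         # 3x3 cross-covariance matrix
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))      # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = obs_c - r @ ref_c
    return r, t
```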
Within a Rigid Body asset, the markers should be placed asymmetrically, because this provides a clear distinction of orientations. Avoid placing the markers in symmetrical shapes such as squares, isosceles triangles, or equilateral triangles. Symmetrical arrangements make asset identification difficult, and they may cause the Rigid Body assets to flip during capture.
When tracking multiple objects using passive markers, it is beneficial to create unique Rigid Body assets in Motive. Specifically, you need to place retroreflective markers in a distinctive arrangement between each object, and it will allow Motive to more clearly identify the markers on each Rigid Body throughout capture. In other words, their unique, non-congruent, arrangements work as distinctive identification flags among multiple assets in Motive. This not only reduces processing loads for the Rigid Body solver, but it also improves the tracking stability. Not having unique Rigid Bodies could lead to labeling errors especially when tracking several assets with similar size and shape.
Note for Active Marker Users
What Makes Rigid Bodies Unique?
The key idea in creating unique Rigid Bodies is to avoid geometric congruency among the Rigid Bodies in Motive (a quick congruency check is sketched after this list).
Unique Marker Arrangement. Each Rigid Body must have a unique, non-congruent, marker placement creating a unique shape when the markers are interconnected.
Unique Marker-to-Marker Distances. When tracking several objects, introducing unique shapes can be difficult. Another solution is to vary the marker-to-marker distances. This creates similar shapes with varying sizes, making them distinguishable from one another.
Unique Marker Counts. Adding extra markers is another method of introducing uniqueness. Extra markers not only make the Rigid Bodies more distinctive, but also provide more options for varying the arrangements to avoid congruency.
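A quick way to reason about congruency is to compare the sorted sets of marker-to-marker distances of two arrangements; if they match within tolerance, the shapes are (near-)congruent and likely to confuse labeling. An illustrative check (not Motive's internal test):

```python
import itertools
import numpy as np

def distance_signature(markers):
    """Sorted list of all pairwise marker-to-marker distances."""
    pts = np.asarray(markers, dtype=float)
    return sorted(np.linalg.norm(a - b)
                  for a, b in itertools.combinations(pts, 2))

def arrangements_congruent(markers_a, markers_b, tolerance=0.005):
    """True if two arrangements match within tolerance (5 mm here)."""
    sig_a = distance_signature(markers_a)
    sig_b = distance_signature(markers_b)
    return len(sig_a) == len(sig_b) and all(
        abs(a - b) < tolerance for a, b in zip(sig_a, sig_b))
```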
What Happens When Rigid Bodies Are Not Unique?
Multiple Rigid Bodies Tracking
Depending on the object, there may be limits on where markers can be placed and on how many unique arrangements can be achieved. The following list provides sample methods for varying arrangements when tracking multiple Rigid Bodies.
1. Create Distinctive 2D Arrangements. Create distinctive, non-congruent, marker arrangements as the starting point for producing multiple variations, as shown in the examples above.
2. Vary Heights. Use marker bases or posts with different heights to introduce variations in elevation and create additional unique arrangements.
3. Vary Maximum Marker to Marker Distance. Increase or decrease the overall size of the marker arrangements.
4. Add Two (or More) Markers. Lastly, if additional variation is needed, add extra markers to introduce uniqueness. We recommend adding at least two extra markers in case one of them is occluded.
A set of markers attached to a rigid object can be grouped and auto-labeled as a Rigid Body. This Rigid Body definition can be utilized in multiple takes to continuously auto-label the same Rigid Body markers. Motive recognizes the unique spatial relationship in the marker arrangement and automatically labels each marker to track the Rigid Body. At least three coordinates are required to define a plane in 3D space, and therefore, a minimum of three markers are essential for creating a Rigid Body.
Step 1.
Step 2.
On the Builder pane, confirm that the selected markers match the markers that you wish to define the Rigid Body from.
Step 3.
Click Create to define a Rigid Body asset from the selected markers.
You can also create a Rigid Body by doing the following actions while the markers are selected:
Perspective View (3D viewport): While the markers are selected, right-click on the perspective view to access the context menu. Under the Rigid Body section, click Create From Selected Markers.
Hotkey: While the markers are selected, use the create Rigid Body hotkey (Default: Ctrl +T).
Step 4.
Defining Assets in Edit mode:
Default Properties
Modifying Properties
An existing rigid body can be modified by adding or removing markers using the context menu.
Ctrl + left-click the markers that you wish to add/remove.
Under Rigid Body, choose Add/Remove selected markers to/from rigid body.
If needed, right-click on the rigid body and select Reset Pivot to relocate the pivot point to the new center.
Multiple Rigid Bodies
Use the gizmo tools from the perspective view options to easily modify the position and orientation of Rigid Body pivot points. You can translate and rotate a Rigid Body pivot, assign the pivot to a specific marker, and/or assign the pivot to a mid-point among selected markers.
Select Tool (Hotkey: Q): Select tool for normal operations.
Translate Tool (Hotkey: W): Translate tool for moving the Rigid Body pivot point.
Rotate Tool (Hotkey: E): Rotate tool for reorienting the Rigid Body coordinate axis.
Scale Tool (Hotkey: R): Scale tool for resizing the Rigid Body pivot point.
Rigid Body tracking data can either be exported to a separate file or streamed to client applications in real time:
When asset definitions are exported to a Motive user profile, the profile stores the marker arrangements calibrated in each asset, and they can be imported into different Takes without creating a new asset in Motive. Note that these files specifically store the spatial relationship of each marker; therefore, only identical marker arrangements will be recognized and defined with the imported asset.
To export the assets, go to Files tab → Export Assets to export all of the assets in the Live-mode or in the current TAK file. You can also use Files tab → Export Profile to export other software settings including the assets.
This feature is supported in Live Mode only.
The Rigid Body refinement tool improves the accuracy of Rigid Body calculation in Motive. When a Rigid Body asset is initially created, Motive references only a single frame for defining the Rigid Body definition. The Rigid Body refinement tool allows Motive to collect additional samples in the live mode for achieving more accurate tracking results. More specifically, this feature improves the calculation of expected marker locations of the Rigid Body as well as the position and orientation of the Rigid Body itself.
Steps
Select the Rigid Bodies from the Type dropdown menu.
Hold the selected physical Rigid Body at the center of the capture volume so that as many cameras as possible can clearly capture its markers.
Slowly rotate the Rigid Body to collect samples at different orientations until the progress bar is full.
Once all necessary samples are collected, the refinement results will be displayed.
In Motive, Skeleton assets are used for tracking human motions. These assets auto-label specific sets of markers attached to human subjects, or actors, and create skeletal models. Unlike Rigid Body assets, Skeleton assets require additional calculations to correctly identify and label 3D reconstructed markers on multiple semi-rigid body segments. To accomplish this, Motive uses pre-defined Skeleton Marker Set templates, each of which is a collection of marker labels and their specific positions on a subject. According to the selected Marker Set, retroreflective markers must be placed at the pre-designated locations on the body. This page details instructions on how to create and use Skeleton assets in Motive.
(Figure: example display settings in Skeleton assets)
Note:
Motive license: Skeleton features are supported only in Motive:Body or Motive:Body - Unlimited.
Skeleton Count: The standard Motive:Body license supports up to 3 Skeletons. For tracking a higher number of Skeletons, activate with a Motive:Body - Unlimited license.
Height requirement: For Skeleton tracking, the subject must be between 1'7" ~ 9' 10" tall.
Use the default create layout to open related panels that are necessary for Skeleton creation. (CTRL + 2).
When it comes to tracking human movements, proper marker placement becomes especially important. Motive utilizes pre-programmed Skeleton Marker Sets, and each marker is used to indicate an anatomical landmark when modeling the Skeleton. Thus, all of the markers must be placed at their appropriate locations. If any of the markers are misplaced, the Skeleton asset may not be created, and even if it is created, bad marker placements may lead to problems. Taking extra care to place the markers at the intended locations is therefore very important and can save time in post-processing of the data.
Attaching markers directly onto a person’s skin can be difficult because of hair, oil, and moisture from sweat. Dynamic human motions also tend to shift the markers during capture, so use appropriate skin adhesives to secure marker bases onto the skin. Alternatively, mocap suits allow velcro marker bases to be used.
Open the Builder pane and go to the Skeleton creation feature. Select the Marker Set you wish to use from the drop-down menu. The total number of required markers for each Skeleton is indicated in parentheses after the Marker Set name, and the corresponding marker locations are displayed on an avatar. Instruct the subject to strike a calibration pose (T-pose or A-pose), then carefully follow the figure and place retroreflective markers at the corresponding locations on the actor or subject.
Joint Markers
Joint markers need to be placed carefully along corresponding joint axes. Proper placements will minimize marker movements during a range of motions and will give better tracking results. To accomplish this, ask the subject to flex and extend the joint (e.g. knee) a few times and palpate the joint to locate the corresponding axis. Once the axis is located, attach the markers along the axis where skin movement is minimal during a range of motion.
Wipe out any moisture or oil on the skin before attaching the marker.
Avoid wearing clothing or shoes with reflective materials since they can introduce extraneous reflections.
Tie up hair which can occlude the markers around the neck.
Remove reflective jewelry.
Place markers in an asymmetrical arrangement by offsetting the related segment markers (markers that are not on joints) at slightly different heights.
Additional Tips
All markers need to be placed at the respective anatomical landmarks.
Place markers where you can palpate the bone or where there are less soft tissues in between. These spots have fewer skin movements and provide secure marker attachment.
Joint markers are vulnerable to skin movements because of the range of motion in the flexion and extension cycle. In order to minimize the influence, a thorough understanding of the biomechanical model used in the post-processing is necessary. In certain circumstances, the joint line may not be the most appropriate location. Instead, placing the markers slightly superior to the joint line could minimize soft tissue artifact, still taking care to maintain parallelism with the anatomical joint line.
Use appropriate adhesives to place each marker and make sure they are securely attached.
Step 1.
Step 2.
Step 3.
Step 4.
Step 5.
Step 6.
The next step is to select the Skeleton creation pose settings. Under the Pose section drop-down menu, select the desired calibration pose you want to use for defining the Skeleton. This is set to the T-pose by default.
Step 7.
Step 8.
Click Create to create the Skeleton. Once the Skeleton model has been defined, confirm that all Skeleton segments and assigned markers are located at the expected locations. If any of the Skeleton segments seem misaligned, delete the Skeleton, adjust the marker placements and the calibration pose, and create the Skeleton again.
In Edit Mode
Virtual Reality Markersets
A proper calibration posture is necessary because the pose of the created Skeleton will be calibrated from it. Read through the following explanations on proper T-poses and A-poses.
T pose
The T-pose is commonly used as the reference pose in 3D animation for binding characters and assets together, and Motive uses this pose when creating Skeletons. A proper T-pose requires a straight posture with the back straight and the head looking directly forward. Both arms are stretched out to the sides, forming a “T” shape. Both arms and legs must be straight, and both feet need to be aligned parallel to each other.
A pose
Palms Down: Arms straight and abducted sideways approximately 40 degrees, palms facing downward.
Palms Forward: Arms straight and abducted sideways approximately 40 degrees, palms facing forward. Be careful not to over-rotate the arms.
Elbows Bent: Similar to the other A-poses: arms abducted approximately 40 degrees, elbows bent so that the forearms point toward the front. Palms face downward, with both forearms aligned.
Calibration markers exist only in the biomechanics Marker Sets.
Many Skeleton Marker Sets do not have medial markers because they can easily collide with other body parts or interfere with the range of motion, all of which increase the chance of marker occlusions.
However, medial markers are beneficial for precisely locating joint axes by associating two markers on the medial and lateral side of a joint. For this reason, some biomechanics Marker Sets use medial markers as calibration markers. Calibration markers are used only when creating Skeletons but removed afterward for the actual capture. These calibration markers are highlighted in red from the 3D view when a Skeleton is first created.
Existing Skeleton assets can be recalibrated using the existing Skeleton information. Basically, recalibration recreates the selected Skeleton using the same Skeleton Marker Set. This feature recalibrates the Skeleton asset and refreshes the expected marker locations on the asset.
Skeleton recalibration does not work with Skeleton templates that have added markers.
Skeleton Marker Sets can be modified slightly by adding or removing markers to or from the template. Follow the below steps for adding/removing markers. Note that modifying, especially removing, Skeleton markers is not recommended since changes to default templates may negatively affect the Skeleton tracking when done incorrectly. Removing too many markers may result in poor Skeleton reconstructions while adding too many markers may lead to labeling swaps. If any modification is necessary, try to keep the changes minimal.
To Add
Select a Skeleton segment that you wish to add extra markers onto.
Then, CTRL + left-click on the marker that you wish to add to the template.
On the Asset Model Markers tool in the Builder pane, click + to add and associate the selected marker with the selected segment.
Reconstruct and Auto-label the Take.
To Remove
[Optional] Under the advanced properties of the target Skeleton, enable the Marker Lines property to view which markers are associated with which Skeleton bones.
Delete the association by clicking the "-" next to the Asset Model Markers tool in the Builder pane while both the target marker and the target segment are selected.
Reconstruct and Auto-label the Take.
When asset definitions are exported to a Motive user profile, the profile stores the marker arrangements calibrated in each asset, and they can be imported into different Takes without creating a new asset in Motive. Note that these files specifically store the spatial relationship of each marker; therefore, only identical marker arrangements will be recognized and defined with the imported asset.
To export the assets, go to Files tab → Export Assets to export all of the assets in the Live-mode or in the current TAK file. You can also use Files tab → Export Profile to export other software settings including the assets.
For biomechanics applications, joint angles must be computed accurately using the respective Skeleton model solve, which can be accomplished with biomechanical analysis software. Export C3D files or stream tracking data from Motive and import them into the analysis software for further calculation. From the analysis, various biomechanics metrics, including joint angles, can be obtained.
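For example, exported C3D files can be loaded into a script for downstream analysis. A minimal sketch using the third-party c3d Python package (an assumption; any C3D reader works, and the file name is hypothetical):

```python
import c3d  # third-party package: pip install c3d

with open('session_trial01.c3d', 'rb') as handle:  # hypothetical file name
    reader = c3d.Reader(handle)
    for frame_number, points, analog in reader.read_frames():
        # Each row of points holds x, y, z, residual, and camera count
        # for one labeled marker in this frame.
        marker_xyz = points[:, :3]
```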
To export Skeleton constraints XML file
To import Skeleton constraints XML file
This page provides basic description of marker labels and instructions on labeling workflow in Motive.
Marker Label
Marker labels are essentially software name tags assigned to the trajectories of reconstructed 3D markers so that they can be referenced for tracking individual markers, Rigid Bodies, or Skeletons. Motive identifies marker trajectories using the assigned labels. Labeled trajectories can be exported individually or combined together to compute the positions and orientations of tracked objects. In most applications, all of the target 3D markers will need to be labeled in Motive. There are two methods for labeling markers in Motive: auto-labeling and manual labeling; both are covered on this page.
Monitoring Labels
Labeled or unlabeled trajectories can be identified and resolved from the following places in Motive:
There are two approaches to labeling markers in Motive:
Auto-label pipeline: Automatically label sets of Rigid Body markers and Skeleton markers using calibrated asset definitions.
Rigid Body and Skeleton asset definitions contain information about the marker placements on the corresponding assets. This is recorded when the assets are first created, and the auto-labeler in Motive uses it to label sets of reconstructed 3D trajectories that resemble the marker arrangements of active assets. Once all of the markers on active assets are successfully labeled, the corresponding Rigid Bodies and Skeletons get tracked in the 3D viewport.
The auto-labeler runs in real-time during Live mode and the marker labels get saved onto the recorded TAKs. Running the auto-labeler again in post-processing will basically attempt to label the Rigid Body and Skeleton markers again from the 3D data.
From Data pane
Right-click to bring up the context menu
Click Reconstruct and Auto-Label to process the selected Takes. This pipeline will create a new set of 3D data and auto-label the markers from it.
This will label all of the markers that match the corresponding asset definitions.
Since creating rigid bodies, or skeletons, groups the markers in each set and automatically labels them, Marker Sets are not commonly used in the processing workflow. However, they are still useful for marker-specific tracking applications or when the marker labeling is done in pipelines other than auto-labeling. Also, marker sets are useful when organizing and reassigning the labels.
Under the drop-down menu in the Labels pane, select an asset you wish to label.
All of the involved markers will be displayed under the label columns.
From the label list, select unlabeled or mislabeled markers.
Hiding Marker Labels
Labeling Tips
When working with Skeleton assets, label the hip segment first. The hip segment is the main parent segment, the top of the segment hierarchy, to which all other child segments are associated. Manually assigning the hip markers sometimes helps the auto-labeler label the entire asset.
Step 4. Select an asset that you wish to label.
Step 5. From the label columns, click on a marker label that you wish to re-assign.
Step 6. Inspect behavior of a selected trajectory and its labeling errors and set the appropriate labeling settings (allowable gap size, maximum spike and applied frame ranges).
Step 7. Switch to the QuickLabel mode (Hotkey: D).
Step 9. When all markers have been labeled, switch back to the Select Mode.
Step 1. Start with 2D data of a captured Take with model assets (Skeletons and Rigid Bodies).
Step 3. Examine the reconstructed 3D data, and inspect the frame range where markers are mislabeled.
Step 5. Unlabel all trajectories you want to re-auto-label.
Step 6. Auto-Label the Take again. Only the unlabeled markers will get re-labeled, and all existing labels will be kept the same.
Step 7. Re-examine the marker labels. If some labels are still not assigned correctly in any of the frames, repeat steps 3-6 until complete.
The general process for resolving labeling error is:
Identify the trajectory with the labeling error.
Determine if the error is a swap, an occlusion, or unlabeled.
Resolve the error with the correct tool.
Swap: Use the Swap Fix tool ( Edit Tools ) or just re-assign each label ( Labels panel ).
When manually labeling markers to fix swaps, set appropriate settings for the labeling direction, max spike, and selected range settings.
Occlusion: Use the Gap Fill tool ( Edit Tools ); see the interpolation sketch after this list.
Unlabeled: Manually label an unlabeled trajectory with the correct label ( Labels panel ).
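For intuition, the occlusion case amounts to interpolating across the missing frames. A minimal linear-interpolation sketch (Motive's Gap Fill tool offers more capable model-based fills; NaN rows stand for occluded frames):

```python
import numpy as np

def fill_gaps_linear(trajectory):
    """trajectory: (frames, 3) array with NaN rows for occluded frames."""
    traj = np.asarray(trajectory, dtype=float).copy()
    valid = ~np.isnan(traj).any(axis=1)
    frames = np.arange(len(traj))
    for axis in range(traj.shape[1]):
        # Interpolate each coordinate across the gap from neighboring frames.
        traj[~valid, axis] = np.interp(
            frames[~valid], frames[valid], traj[valid, axis])
    return traj
```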
When recorded 3D data have been labeled properly and entirely throughout the Take, you will not need to edit marker labels. If you don't have 3D data recorded, you can reconstruct and auto-label the Take to obtain 3D data and label all of the skeleton and rigid body markers. If all of the markers are well reconstructed and there are no significant occlusions, auto-labeled 3D data may be acceptable right away. In this case, you can proceed without post-processing of marker labels.
Recorded 3D data has no gaps in the labels, or the Reconstruct and Auto-label works perfectly the first time without additional post-processing.
Done.
When Skeleton markers are mislabeled only within specific frame ranges of a Take, you will have to manually re-label the markers. This may occur when a subject performs dynamic movements or comes into contact with another object during the recorded Take. After correcting the mislabeled markers, you can also use the auto-labeler to assign the remaining missing labels.
Start with recorded 3D data or Reconstruct and auto-label the Take to obtain newly labeled 3D data.
Inspect the Take to pick out the frame ranges with bad tracking.
Scrub the timeline to a frame just before the bad tracking frame range.
Scrub the timeline to a frame after the bad tracking frame range.
Manually label the same skeleton.
Auto-label the Take.
Check the frames again and correct any remaining mislabels using the Labeling pane.
For Take(s) where skeletons are never perfectly tracked and the markers are consistently mislabeled, you will need to manually assign the correct labels for the skeleton asset(s). Situations like this can happen when the skeleton(s) are never in an easily trackable pose throughout the Take (e.g. captures where the actors are rolling on the ground). It is usually recommended that all skeleton Takes start and end with a T-pose in order to easily distinguish the skeleton markers.
This also helps the skeleton solver correctly auto-label the associated markers; however, in some cases, only a specific section of a Take needs to be trimmed out, or including the calibration poses might not be possible. Manually assigning labels can help the auto-labeler correctly label the markers and allow skeletons to be acquired properly in a Take.
You will get the best results if you manually label the entire skeleton, but doing so can be time-consuming. You can also label only the mislabeled segment or the key segment (the hip) and run the auto-labeler to see if it correctly assigns the labels with that small amount of help.
Start with recorded 3D data or Reconstruct the Take.
Check to see if all markers are correctly assigned throughout the Take. If not, re-label or unlabel any mislabeled markers and run the auto-labeler again if needed.
Marker occlusions can be critical to the auto-labeling process. After having a gap for multiple frames, occluded markers can be unlabeled entirely, or nearby reconstructions can be mistakenly recognized as the occluded marker and result in labeling swaps or mislabels. Skeleton and rigid body asset definitions may accommodate labeling for such occlusions, but in some cases, labeling errors may persist throughout the Take. The following steps can be used to re-assign the labels in this case.
Start with recorded 3D data or Reconstruct and auto-label the Take
Examine the Take, and go to a frame where markers are mislabeled right after an occlusion.
Using the Quick Label Mode, correct the labeling errors.
Move onto next occluded frames. When the marker reappears, correct the labels.
After correcting the labels, Auto-label the Take again.
The Mask Visible feature in the Calibration pane, or in the 2D Camera Preview pane, automatically detects all of the existing reflections present in the 2D view and masks over them. If desired, masks can be manually created by drawing, selecting rectangular regions, or selecting circular regions in the image using the masking tools, and you can also subtract masks by toggling between the additive and subtractive masking modes (add or subtract).
Tip: Prime series cameras illuminate blue when in Live mode, green when recording, and turn off in Edit mode.
In Motive, capture recording is controlled from the Control Deck. In the Live mode, a new Take name can be assigned in the name box, or you can simply start the recording and let Motive automatically generate new names on the fly. You can also create empty Takes in the Data Management pane for better organization. To start the capture, select the Live mode and click the recording button (red). In the Control Deck, the record time and frame count are displayed as Hours:Minutes:Seconds:Frames.
In Motive, all of the recorded capture files are managed through the Data Management pane. Each capture is saved into a Take (TAK) file, which can be played back in the Edit mode later. Related Take files can be grouped within session folders: simply create a new folder in the desired directory and load the folder into the Data Management pane. The currently selected session folder is indicated with the flag symbol, and all newly recorded Takes will be saved into this folder.
If a capture was unsuccessful, simply record the same Take again, and another one will be recorded with an incremented suffix added at the end of the given Take name (e.g. walk_001, walk_002, walk_003). The suffix format is defined in the Application Settings.
When a capture is first recorded, both 2D data and real-time reconstructed 3D data are saved into the Take. For more details on each data type, refer to the data types page.
Throughout capture, you might notice different types of markers appearing in the 3D viewport. In order to correctly interpret the tracking data, it is important to understand the differences between these markers. There are three displayed marker types: marker reconstructions, Rigid Body markers, and bone (or Skeleton) markers.
Colors of the unlabeled markers can be changed from the Application Settings.
Read through the labeling page for more information on marker labels.
For instructions on creating a measurement probe, please refer to the measurement probe page. You can purchase our probe or create your own; all you need is 4 markers with a static relationship to a projected tip.
Use the created measurement probe to collect 3D data points that outline the silhouette of your object. Mark all of the corners and other key features on the object.
After 3D data points have been generated using the probe, attach your game geometry (OBJ file) to the Rigid Body by enabling the attached geometry display option and importing the geometry under the Attached Geometry property.
The next step is to translate the 3D model so that the attached model aligns with the silhouette samples collected in Step 3. The model can be easily translated and rotated using the gizmo tools. Move, rotate, and scale the asset until it is aligned with the silhouette.
For accurate alignment, it may be easier to decrease the size of the marker visuals. This can be changed from the marker size setting under the Application Settings panel.
After you have translated, rotated, and scaled the pivot point of the Rigid Body to align the attached 3D model with the sampled data points, the transformation values will be shown under the Attached Geometry property.
Copy and paste these transformation values into the Rigid Body location and orientation options under the Edit tab in the Builder pane. This will translate the pivot point of the Rigid Body in Motive and align it with the pivot point of the 3D model.
Alternatively, if the probe method is not applicable, you can switch one of the cameras into grayscale view, right-click on the camera in the Cameras view, and select Make Reference. This will create a Rigid Body overlay in the reference camera view so you can align the Rigid Body pivot using a similar approach as above.
Images in recorded 2D data depend on the video mode, also called the video type, that each camera was set to at the time of the capture. Cameras set to reference modes (MJPEG grayscale images) record reference videos, and cameras set to tracking modes (object, precision, segment) record 2D object images which can be used in the reconstruction process. The 2D object data contains the x and y centroid positions of the captured reflections as well as their corresponding sizes (in pixels) and roundness.
Using the 2D object data along with the camera calibration information, 3D data is computed. Extraneous reflections that fail to satisfy the 2D object filter parameters get filtered out, and only the remaining reflections are processed. The process of converting 2D centroid locations into 3D coordinates is called reconstruction, and it is covered later on this page.
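The core of reconstruction is triangulation: intersecting the rays behind each camera's 2D centroid. A minimal two-view linear triangulation (DLT) sketch, assuming 3x4 projection matrices derived from calibration (Motive's engine handles many cameras, filtering, and error checks):

```python
import numpy as np

def triangulate(proj_a, proj_b, centroid_a, centroid_b):
    """Recover a 3D point from one 2D centroid per camera.
    proj_*: 3x4 camera projection matrices; centroid_*: (u, v) pixels."""
    rows = []
    for proj, (u, v) in ((proj_a, centroid_a), (proj_b, centroid_b)):
        rows.append(u * proj[2] - proj[0])
        rows.append(v * proj[2] - proj[1])
    # The homogeneous 3D point is the null vector of the stacked system.
    _, _, vt = np.linalg.svd(np.asarray(rows))
    x = vt[-1]
    return x[:3] / x[3]
```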
3D data can be reconstructed either in real time or post-capture. For real-time capture, Motive processes the captured 2D images on a per-frame basis and streams the 3D data to external pipelines with extremely low processing latency. For recorded captures, the saved 2D data can be used to create a fresh set of 3D data through the reconstruction pipeline, and any existing 3D data will be overwritten with the newly reconstructed data.
Contains the 2D frames, or 2D object information, captured by each camera in the system. 2D data can be monitored from the 2D Camera Preview pane.
3D data contains the 3D coordinates of reconstructed markers. 3D markers are reconstructed from 2D data and show up in the perspective view. Each of their trajectories can be monitored in the Graph pane. In recorded 3D data, marker labels can be assigned to reconstructed markers either through the auto-labeling process using asset definitions or by manually assigning them. From these labeled markers, Motive solves the positions and orientations of Rigid Bodies and Skeletons.
Recorded 3D data is editable. Each frame of a trajectory can be deleted or modified. The post-processing Edit Tools can be used to interpolate missing trajectory gaps or apply smoothing, and the Labels pane can be used to assign or reassign marker labels.
Lastly, from recorded 3D data, the tracking data can be exported into various file formats — CSV, C3D, FBX, and more.
The Gap Fill tools can be used to fill the trajectory gaps.
Solved data is the positional and rotational, 6 degrees of freedom (DoF), tracking data of Rigid Bodies and Skeletons. This data is stored when a TAK is first captured, and it can be removed or recalculated from recorded 3D data. Solved data is fully calculated on all of the recorded frames, and when it exists, the real-time Rigid Body and Skeleton solvers do not run during playback. This reduces the amount of processing needed for playback and improves performance.
In the Assets pane, right-click on the selected asset(s) and click Record Solved Data. Assets that contain solved data will be indicated under the solved column.
In the Data pane, right-click on a Take and click Solve All Assets to produce solved data for all of the associated assets. Takes that contain solved data will be indicated under the solved column.
Recorded 2D data, audio data, and reference videos can be deleted from a Take file. To do this, open the Data pane, right-click on the recorded Take(s), and click Delete 2D Data from the context menu. A dialogue window will pop up asking which types of data to delete. After removing the data, a backup file will be archived into a separate folder.
Deleting 2D data will significantly reduce the size of the Take file. You may want to delete recorded 2D data when a final version of reconstructed 3D data is already recorded in a Take and the 2D data is no longer needed. However, be aware that deleting 2D data removes the most fundamental data from the Take file. After 2D data has been deleted, the action cannot be reverted, and without 2D data, 3D data cannot be reconstructed again.
Recorded 3D data can be deleted from the context menu in the Data pane. To delete 3D data, right-click on the selected Takes and click Delete 3D Data, and all reconstructed 3D information will be removed from the Take. When you delete 3D data, all edits and labeling are deleted as well. Again, new 3D data can always be reacquired by reconstructing and auto-labeling the Take from 2D data.
When multiple Takes are selected from the Data pane, deleting 3D data will remove 3D data from all of the selected Takes, across their entire frame ranges.
Assigned marker labels can be deleted from the context menu in the Data pane. The Delete Marker Labels feature removes all marker labels from the 3D data of the selected Takes; all markers will become unlabeled.
If you are using active markers for tracking multiple Rigid Bodies, it is not required to have unique marker placements. Through the active labeling protocol, active markers can be labeled individually, and multiple Rigid Bodies can be distinguished through uniquely assigned marker labels. Please read through the active marker tracking page for more information.
Having multiple non-unique Rigid Bodies may lead to mislabeling errors. However, in Motive, non-unique Rigid Bodies can still be tracked fairly well as long as they are continuously tracked throughout capture: Motive can refer to the trajectory history to identify and associate the corresponding Rigid Bodies across frames. In order to track non-unique Rigid Bodies, you must make sure the Unique setting (Properties → General Settings) of the assets is set to False.
Even though it is possible to track non-unique Rigid Bodies, it is strongly recommended to make each asset unique. Tracking of multiple congruent Rigid Bodies can be lost during capture, either through occlusion or by stepping outside the capture volume. Also, when two non-unique Rigid Bodies are positioned in close vicinity and overlap in the scene, their marker labels may get swapped. If this happens, additional effort will be required to correct the labels in post-processing of the data.
Select all associated Rigid Body markers in the 3D viewport.
Assets pane: While the markers are selected in Motive, click on the add (+) button in the Assets pane.
Once the Rigid Body asset is created, the markers will be colored (labeled) and interconnected to each other. The newly created Rigid Body will be listed in the Assets pane.
If Rigid Bodies or Skeletons are created in the Edit mode, the corresponding Take needs to be reconstructed and auto-labeled. Only then will the Rigid Body markers be labeled using the Rigid Body asset, and positions and orientations computed for each frame. If the 3D data has not been re-labeled after edits on the recorded data, the asset may not be tracked.
Rigid Body properties consist of various configurations of Rigid Body assets in Motive, and they determine how Rigid Bodies are tracked and displayed. For more information on each property, read through the Rigid Body properties page.
When a Rigid Body is first created, default Rigid Body properties are applied to the newly created asset. The default creation properties are configured under the Assets section in the Application Settings panel.
Properties for existing Rigid Body assets can be changed from the Properties pane.
First select a Rigid Body from the Assets pane or by selecting its pivot point in the 3D viewport.
Right-click to open the Rigid Body context menu.
When multiple Rigid Bodies are selected, the context menu applies only to the primary Rigid Body selection. The primary Rigid Body is the last Rigid Body you selected, and its name shows up in the bottom-right corner of the viewport.
Created Rigid Body definitions can be modified using the editing tools in the Builder pane or by using the steps covered in the following sections.
The pivot point of a Rigid Body is used to define both its position and orientation. When a Rigid Body is created, its pivot point is placed at its geometric center by default, and its orientation axes are aligned with the global coordinate axes. To view the pivot point and orientation in the 3D viewport, set Bone Orientation to true under the display settings of the selected Rigid Body in the Properties pane.
The position and orientation of a tracked Rigid Body can be monitored in real time. Simply select a Rigid Body in Motive, open the Info pane, and access the Rigid Bodies tool to view the real-time tracking data of the selected Rigid Body.
As mentioned previously, the orientation axes of a Rigid Body are, by default, aligned with the global axes when the Rigid Body is first created. After a Rigid Body is created, its orientation can be adjusted by editing the Rigid Body orientation in the Builder pane or by using the gizmo tools as described in the next section.
There are situations where the desired pivot point location is not at the center of a Rigid Body. The location of a pivot point can be adjusted by assigning it to a marker or by translating it along the Rigid Body axes (x, y, z). For the most accurate pivot point location, attach a marker at the desired pivot location, set the pivot point to that marker, and apply translations for precise adjustments. If you are adjusting the pivot point after the capture, in the Edit mode, the Take will need to be reconstructed and auto-labeled again to apply the changes.
Read through the page for detailed information.
To assign the pivot point to a marker, first select the pivot point in the 3D viewport, then CTRL + select the marker that you wish to assign it to. Right-click to open the context menu, and in the Rigid Body section, click Set Pivot Point to Selected Marker.
To translate the pivot point, access the Rigid Body editing tools in the Builder pane while the Rigid Body is selected. In the Location section, you can input the amount of translation (in mm) that you wish to apply. Note that the translation is applied along the x/y/z of the Rigid Body orientation axes. Resetting the translation positions the pivot point at the geometric center of the Rigid Body according to its marker positions.
If you wish to reset the pivot point, simply open the Rigid Body context menu and click Reset Pivot. The pivot point will be placed back at the center of the Rigid Body.
This feature is useful when tracking a spherical object (e.g. a ball). The Spherical Pivot Placement feature in the Builder pane assumes that all of the Rigid Body markers are placed on the surface of a sphere, and the pivot point is calculated and re-positioned accordingly. To do this, select a Rigid Body, access the Modify tab in the Builder pane, and click Apply under Spherical Pivot Placement.
Captured 6 DoF Rigid Body data can be exported into CSV, FBX, or BVH files. See:
You can also use one of the streaming plugins or a NatNet client application to receive tracking data in real time. See:
Assets can be exported into a Motive user profile (.MOTIVE) file if they need to be re-imported. The user profile is a text-readable file that can contain various configuration settings in Motive, including the asset definitions.
From the toolbar at the top, open the Builder pane.
In Motive, select an existing Rigid Body asset that you wish to refine from the Assets pane.
Click Start Refine in the Builder pane.
All markers need to be placed at the respective anatomical locations of a selected Skeleton, as shown in the Builder pane. Skeleton markers can be divided into two categories: markers that are placed along joint axes (joint markers) and markers that are placed on body segments (segment markers).
Segment markers are markers that are placed on Skeleton body segments, away from the joints. For best tracking results, each segment marker placement must be asymmetrical with respect to the corresponding segment on the opposite side of the Skeleton (e.g. left thigh and right thigh), and segment markers must also be placed asymmetrically within each segment. This helps the Skeleton solver thoroughly distinguish the left-side and right-side segments of the corresponding Skeleton throughout the capture. This asymmetrical placement is also emphasized in the avatars shown in the Builder pane. Segment markers that can be slightly moved to different places on the same segment are highlighted on the 3D avatar in the Skeleton creation window of the Builder pane.
See also:
When using the biomechanics Marker Sets, markers must be placed precisely and with extra care, because these placements directly relate to the coordinate system definition of each respective segment, thus affecting the resulting biomechanical analysis. The markers need to be placed on the skin for a direct representation of the subject's movement; mocap suits are not suitable for biomechanics applications. While the basic marker placement must follow the avatar in the Builder pane, additional details on accurate placements can be found on the corresponding marker placement page.
From the Skeleton creation options on the Builder pane, select a Skeleton Marker Set template from the Template drop-down menu. This will bring up a Skeleton avatar displaying where the markers need to be placed on the subject.
Refer to the avatar and place the markers on the subject accordingly. For accurate placements, ask the subject to stand in the calibration pose while placing the markers. It is important that these markers get placed at the right spots on the subject's body for the best Skeleton tracking, so extra attention is needed when placing the joint markers.
The magenta markers indicate the segment markers that can be placed at a slightly different position within the same segment.
Double-check the marker counts and their placements; it may be easier to use the Builder pane in Motive to do this. The system should be tracking the attached markers at this point.
In the Builder pane, make sure the numbers under the Markers Needed and Markers Detected sections match. If the Skeleton markers are not automatically detected, manually select the Skeleton markers from the 3D viewport.
Select a desired set of marker labels under the Labels section. Here, you can use the Default labels to assign labels defined by the Marker Set template, or you can assign custom labels by loading previously prepared label files.
Ask the subject to stand in the selected calibration pose. Standing in a proper calibration posture is important because the pose of the created Skeleton will be calibrated from it. For more details, read the calibration pose section.
If you are creating a Skeleton in post-processing of captured data, you will have to reconstruct and auto-label the Take to see the Skeleton modeled and tracked in Motive.
Skeleton Marker Sets for VR applications have slightly different setup steps. See:
By configuring Skeleton properties, you can modify the display settings as well as the Skeleton creation pose settings for Skeleton assets. For newly created Skeletons, default Skeleton creation properties are configured under the application settings. Properties of existing, or recorded, Skeleton assets are configured under the Properties pane while the respective Skeletons are selected in Motive.
The A-pose is another type of calibration pose that can be used to create Skeletons. Set the Skeleton Create Pose setting to the A-pose you wish to calibrate with. This pose is especially beneficial for subjects who have restrictions in lifting their arms. Unlike the T-pose, the arms are abducted at approximately 40 degrees from the midline of the body, creating an A-shape. There are three different types of A-pose: palms down, palms forward, and elbows bent.
After creating a Skeleton from the Builder pane, the calibration markers need to be removed. First, detach the calibration markers from the subject. Then, in Motive, right-click on the Skeleton in the perspective view to access the context menu and click Skeleton → Remove Calibration Markers. Check the 3D viewport to make sure that the Skeleton no longer expects markers at the corresponding medial positions.
To recalibrate Skeletons, select all of the associated Skeleton markers in the perspective view and click Recalibrate From Markers, which can be found in the Skeleton context menu. When using this feature, select a Skeleton and the markers that are related to the corresponding asset.
Skeleton marker colors and marker sticks can be viewed in the 3D viewport. They provide color schemes for clearer identification of Skeleton segments and individual marker labels in the perspective viewport. To make them visible, enable Marker Sticks and Marker Colors under the visual aids options. A default color scheme is assigned when creating a Skeleton asset. To modify marker colors and labels, you can use the Constraints pane.
Constraints store marker label, color, and marker stick information, which can be modified, exported, and re-imported as needed. For more information on doing this, please refer to the corresponding page.
Marker colors and sticks are featured only in Motive 1.10 and above, and skeletons created using earlier Motive versions will not include the colors and sticks. For Takes recorded before 1.10, the skeleton assets will need to be updated from the Assets pane by right-clicking on an asset and selecting Update Markers. The Update Markers feature will apply the default XML template to skeleton assets.
When adding or removing markers in Edit mode, the Take needs to be auto-labeled again to re-label the Skeleton markers.
You can add markers to, or remove markers from, a Rigid Body or a Skeleton using the Builder pane. This is basically adding or removing markers in the existing Rigid Body and/or Skeleton definition. Follow the steps below to add or remove markers:
Access the Modify tab on the Builder pane.
When you add extra markers to Skeletons, the markers will be labeled as Skeleton_CustomMarker#. You can rename the label as needed from the Constraints pane.
Enable selection of Asset Model Markers from the visual aids options in the 3D viewport.
Access the Modify tab on the Builder pane.
Select the Skeleton segment that you wish to modify, then select the associated model markers that you wish to dissociate.
Assets can be exported into a Motive user profile (.MOTIVE) file if they need to be re-imported. The user profile is a text-readable file that can contain various configuration settings in Motive, including the asset definitions.
There are two ways of obtaining Skeleton joint angles. Rough representations of joint angles can be obtained directly from Motive, but the most accurate representations can be obtained by pipelining the tracking data into third-party biomechanics analysis and visualization software.
Joint angles generated and exported from Motive are intended for basic visualization purposes only and should not be used for any type of biomechanical or clinical analysis. A rough representation of joint angles can be obtained by either exporting or streaming the Skeleton Rigid Body tracking data. When exporting the tracking data into CSV, set the export setting to Local to obtain bone segment position and orientation values with respect to the parent segment, roughly representing the joint angles by comparing the two hierarchical coordinate systems. When streaming the data, set the skeleton coordinates to local in the streaming settings to get relative joint angles.
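To make the hierarchical comparison concrete, the hedged sketch below derives a relative joint angle from exported parent and child segment orientations; the quaternion order and Euler sequence here are illustrative assumptions, not Motive's output convention:

```python
# Sketch: a rough joint angle as the child segment orientation expressed in
# the parent segment frame. Quaternions assumed in x, y, z, w order.
from scipy.spatial.transform import Rotation as R

def relative_joint_angles(q_parent_xyzw, q_child_xyzw, seq="XYZ"):
    """Euler angles (degrees) of the child relative to its parent."""
    q_rel = R.from_quat(q_parent_xyzw).inv() * R.from_quat(q_child_xyzw)
    return q_rel.as_euler(seq, degrees=True)

# Example: child flexed 30 degrees about the parent's X axis.
parent = [0, 0, 0, 1]                            # identity orientation
child = R.from_euler("x", 30, degrees=True).as_quat()
print(relative_joint_angles(parent, child))      # approximately [30, 0, 0]
```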
Each Skeleton asset has its marker template stored in an XML file. By exporting, customizing, and importing the constraint XML files, a Skeleton Marker Set can be modified. Specifically, customizing the XML files allows you to modify Skeleton marker labels, marker colors, and marker sticks within a Skeleton asset. For detailed instructions on modifying Skeleton XML files, read through the corresponding page.
To export a Skeleton XML file, right-click on a Skeleton asset under the Assets pane and use the Export Constraints feature to export the corresponding Skeleton marker XML file.
You can import a marker XML file under the Labels section of the Builder pane when first creating a new Skeleton. To import a constraints XML file on an existing Skeleton, right-click on a Skeleton asset under the Assets pane and click Import Constraints.
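If many labels need the same change, the exported XML can also be edited in a script before re-importing. The sketch below uses the Python standard library; the element and attribute names are assumptions, so inspect your own exported file first:

```python
# Hypothetical sketch: batch-renaming marker labels in an exported
# constraints XML. Element/attribute names ("marker", "name") are assumed.
import xml.etree.ElementTree as ET

tree = ET.parse("skeleton_constraints.xml")      # file exported from Motive
for marker in tree.getroot().iter("marker"):     # assumed element name
    name = marker.get("name", "")
    if name.startswith("Skel001_"):              # strip an unwanted prefix
        marker.set("name", name.replace("Skel001_", "", 1))
tree.write("skeleton_constraints_edited.xml")
```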
3D viewport: From the 3D viewport, check the Marker Labels option under the visual aids to view marker labels for selected markers.
Labels pane: The Labels pane lists all of the marker labels and the corresponding gap percentage for each label. The color of the label also indicates whether the label is present or missing at the current frame.
Graph pane: For frames where the selected label is not assigned to any markers, the timeline scrubber gets highlighted in red. Also, the tracks view of this pane provides a list of labels and their continuity in a captured Take.
Manual Label: Manually label individual markers using the Labels pane.
For tracking Rigid Bodies and Skeletons, Motive can use the auto-labeler to automatically label associated markers, both in real-time and in post-processing. The auto-labeler references the assets that are enabled, or checked, in the Assets pane to search for a set of markers that matches each asset definition, and it assigns the pre-defined labels throughout the capture.
There are times, however, when it is necessary to manually label a section or all of a trajectory, either because the markers of a Rigid Body or a Skeleton were misidentified (or unidentified) during capture, or because individual markers need to be labeled without using any tracking assets. In these cases, the Labels pane in Motive is used to perform manual labeling of individual trajectories. The manual labeling workflow is supported only in post-processing, when a Take file (TAK) has been loaded with 3D data as its playback type. If a Take contains only 2D data, it must be reconstructed first in order to assign, or edit, the marker labels in its 3D data. This manual labeling process, along with data editing, is typically referred to as post-processing of mocap data.
Select Takes from the Data pane.
The settings for the auto-labeling engine are defined under the Auto-Labeler section of the Reconstruction pane. The auto-labeler parameters can be modified during post-processing pipelines, and they can be optimized for stable labeling of markers throughout the Take.
Note: Be careful when reconstructing a Take again, either via Reconstruct or Reconstruct and Auto-label, because it will overwrite the 3D data, and any post-processing edits on trajectories and marker labels will be discarded. Also, for Takes involving Skeleton assets, the recorded Skeleton marker labels, which were intact during the live capture, may be discarded, and reconstructed markers may not be auto-labeled again if the Skeletons are never in well-trackable poses throughout the captured Take. This is another reason why you want to start a capture with a calibration pose (e.g. a T-pose).
The Marker Set is a type of asset in Motive. It is the most fundamental method of grouping related markers, and it can be used to manually label individual markers in post-processing of captured data using the Labels pane. Note that Marker Sets are used for manual labeling only. For automatic labeling during Live mode, a Rigid Body asset or a Skeleton asset is necessary.
To create a Marker Set, click the add icon under the Assets pane and select New Marker Set.
Once a Marker Set asset is created, its list of labels can be managed from the Marker Set pane. First, the Marker Set asset must be selected in Motive, and the corresponding asset will be listed on the pane. Then, new marker labels can be added by clicking the add icon. If you wish to create multiple marker labels at once, they can be added by typing in the labels or by copying and pasting a carriage-return-delimited list of labels from the Windows clipboard into the pane (press Ctrl+V in the Marker List window).
The Labels pane is used to assign, remove, and edit marker labels in the 3D data. The Tracks View under the Graph pane can be used in conjunction with the Labels pane to monitor which markers and gaps are associated. The Labels pane is also used to examine the number of occluded gaps in each label, and it can be used along with the Edit Tools for complete post-processing.
Using the Labels pane, you can assign marker labels for each asset (Marker Set, Rigid Body, and Skeleton) via the QuickLabel mode. The Labels pane also shows a list of labels involved in the Take and their corresponding percent-completeness values. The percent-completeness values indicate the percentage of frames in a Take for which the trajectory has been labeled. If the trajectory has no gaps (100% complete), no number is shown. You can use this pane together with the Graph pane to quickly locate gaps in a trajectory.
For a given frame, all labels are color-coded. For each frame of 3D data, assigned marker labels are shown in white, labels without reconstructions are shown in red, and unlabeled reconstructions are shown in orange, similar to how they are presented in the 3D viewport.
See the Labels pane page for a detailed explanation of each option.
The QuickLabel mode allows you to tag labels with single clicks in the 3D viewport, and it is a handy way to reassign or modify marker labels throughout the capture. When the QuickLabel mode is toggled, the mouse cursor switches to a finger icon with the selected label name attached next to it. Also, when the display label option is enabled in the viewport, all assigned marker labels will be displayed next to each marker. Select the marker set you wish to label, and tag the appropriate labels to each marker throughout the capture.
When assigning labels using the QuickLabel mode, the labeling scope is configured from the labeling range settings. You can restrict the labeling operation to apply from the current frame backward, from the current frame forward, or both. You may also restrict labeling operations to apply the selected label to all frames in the Take, to a selected frame range, or to a trajectory 'fragment' enclosed by gaps or spikes. The fragment/spike setting is used by default, and it best identifies mislabeled frame ranges when assigning marker labels. See the Labels pane page for details on each feature.
Inspect the behavior of the selected trajectory and decide whether you want to apply the selected label to frames forward, frames backward, or both. This option can be selected from the labeling range settings on the Labels pane.
Switch to the QuickLabel mode (Hotkey: D).
In the Labels pane, assign the selected label to a marker. If the increment option is set under the Labels pane, the label selection will automatically advance each time you assign a label.
After assigning all labels, switch back to the normal Select mode.
If marker labels are set to visible in the viewport, Motive will show all of the marker labels when entering the QuickLabel mode. To hide all of the marker labels in the viewport, click on the visual aids option in the perspective view and uncheck marker labels.
The following section provides the general labeling steps in Motive. Note that the labeling workflow is flexible, and alternative approaches to the steps listed in this section can also be used. Utilize the auto-labeling pipelines in combination with the Labels pane to best reconstruct and label the 3D data of your capture.
Use the Labels pane to monitor occlusion gaps and labeling errors as you post-process captured Takes.
When using the Labels pane, choose the most appropriate labeling setting (all, selected, spike, or fragment) to efficiently label selected trajectories. See more from the Labeling pane page.
Hotkeys can increase the speed of the workflow. Use the Z and Shift+Z hotkeys to quickly find gaps in the selected trajectory.
Show/hide skeleton visibility in the 3D viewport to have a better view of the markers when assigning marker labels.
Toggle skeleton selectability in the perspective view to use the skeleton as a visual aid without it getting in the way of marker data.
Show/hide marker sticks and marker colors under the visual aids in the perspective view options for intuitive identification of labeled markers as you tag through skeleton markers.
For skeleton assets, the corresponding display property can be utilized to display tracking errors on skeleton segments.
Step 1. From the Data pane, Reconstruct and auto-label the Take with all of the desired assets enabled.
Step 2. In the Graph pane, examine the trajectories and navigate to a frame where labeling errors are frequent.
Step 3. Open the Labels pane.
Step 8. In the 3D viewport, assign the labels onto the corresponding marker reconstructions by clicking on them.
Step 2. Reconstruct and Auto-Label, or just Reconstruct, the Take with all of the desired assets enabled under the Assets pane. If you use Reconstruct only, you can skip steps 3 and 5 for the first iteration.
Step 4. Using the Labels pane, manually fix/assign marker labels, paying attention to your label settings (direction, max gap, max spike, selected duration).
For more data editing options, read through the Data Editing page.
The following tutorials use Motive 1.10. In Motive 2.0, the Data pane and the Assets pane are used instead of the Project pane.
Examine the Take(s). Check the Labeling pane, or the Graph pane, to make sure no occlusion exists within the capture and all markers are consistently labeled.
If markers are mislabeled during the majority of the capture, unlabel all markers from the entire capture by right-clicking on the Take in the Data pane and clicking Delete Marker Labels. You can do this on selected frame ranges as well.
At a certain point in the Take (usually at a frame where you can best identify the pose of the skeleton), use the Labels pane to manually assign the marker labels for skeletons that are not labeling correctly. Depending on the severity of the mislabels, you can either label the entire skeleton or only the key segments starting from the hip.
After manually assigning the labels, auto-label the Take. Make sure the corresponding assets are enabled in the Assets pane.
If tracked markers are relatively stationary during the occluded frames, you may want to increase the Maximum Marker Label Gap value under the Auto-Labeler settings to allow the occluded marker to maintain its label after auto-labeling the Take. However, note that adjusting this setting will not be useful if the marker is moving dynamically beyond the Prediction Radius (mm) setting during the occlusion.
In the Labeling pane, disable the Increment Label Selection option, and select a marker set and a label that is frequently occluded.
In the Labeling pane, disable the Apply Labels to Previous Frames option, and leave only the Apply Labels to Upcoming Frames option enabled.
Use the Fill Gaps tool in the Edit Tools to interpolate the occluded trajectories.
Overall Reprojection
Displays the overall resulting 2D and 3D reprojection error values from the calibration.
Worst Camera
Displays the highest 2D and 3D reprojection error value from the calibration.
Triangulation
The Triangulation section displays calibration results as residual offset values. A smaller residual error means more precise reconstructions.
Recommended: Recommended maximum residual offset for point cloud reconstruction.
Residual Mean Error: Average residual error from the calibration.
Overall Wand Error
Displays a mean error value of the detected wand length throughout the wanding process.
Ray Length
Displays a suggested maximum tracking distance, or a ray length, for each camera.
Overall Result
Grades the quality of the calibration result.
Maximum Error (px)
Displays the maximum reprojection error from the calibration.
Minimum Error (px)
Displays the minimum reprojection error from the calibration.
Average Error (px)
Displays the average reprojection error from the calibration.
Wand Error (mm)
Displays a mean error value of the detected wand length throughout the wanding process.
Calculation Time
Displays the total calculation time.
The Edit Tools in Motive enable users to post-process tracking errors in recorded capture data. There are multiple editing methods available, and you need to understand them clearly in order to properly fix errors in captured trajectories. Tracking errors are sometimes inevitable due to the nature of marker-based motion capture systems, so understanding the functionality of the editing tools is essential. Before getting into details, note that post-editing of motion capture data often takes a lot of time and effort: all captured frames must be examined precisely, and corrections must be made for each error discovered. Furthermore, some of the editing tools apply mathematical modifications to marker trajectories, and these tools may introduce discrepancies if misused. For these reasons, we recommend optimizing the capture setup so that tracking errors are prevented in the first place.
Common tracking errors include marker occlusions and labeling errors. Labeling errors include unlabeled markers, mislabeled markers, and label swaps. Fortunately, label errors can be corrected simply by reassigning proper labels to markers. Markers may also be blocked from camera views during capture; in this case, the markers will not be reconstructed into 3D space, introducing a gap in the trajectory, which is referred to as a marker occlusion. Marker occlusions are critical because the trajectory data is not collected at all, and retaking the capture may be necessary if the missing marker is significant to the application. For occluded markers, the Edit Tools also provide interpolation pipelines to model the occluded trajectory using other captured data points. Read through this page to understand each of the data editing methods in detail.
Steps in Editing
General Steps
Skim through the overall frames in a Take to get an idea of which frames and markers need to be cleaned up.
Refer to the Labels pane and inspect gap percentages in each marker.
Select a marker that is often occluded or misplaced.
Look through the frames in the Graph pane, and inspect the gaps in the trajectory.
For each gap in frames, look for an unlabeled marker at the expected location near the solved marker position. Re-assign the proper marker label if the unlabeled marker exists.
Use the Trim Tails feature to trim both ends of the trajectory at each gap. This trims off a few frames adjacent to the gap where tracking errors might exist, preparing occluded trajectories for gap filling.
Find the gaps to be filled, and use the Fill Gaps feature to model the estimated trajectories for occluded markers.
In some cases, you may wish to delete 3D data for certain markers in a Take file. For example, you may wish to delete corrupt 3D reconstructions or trim out erroneous movements to improve the data quality. In Edit mode, reconstructed 3D markers can be deleted for a selected range of frames. To delete a 3D marker, first select the markers that you wish to delete and press the Delete key; they will be completely erased from the 3D data. If you wish to delete 3D markers for a specific frame range, open the Graph pane, select the frame range that you wish to delete the markers from, and press the Delete key. The 3D trajectories of the selected markers will be erased for the highlighted frame range.
Note: Deleted 3D data can be recovered by reconstructing and auto-labeling new 3D data from recorded 2D data.
The trimming feature can be used to crop a specific frame range from a Take. For each round of trimming, a copied version of the Take will be automatically archived and backed up into a separate session folder.
Steps for trimming a Take
1) Determine a frame range that you wish to extract.
2) Set the working range (also called the view range) on the Graph View pane. All frames outside of this range will be trimmed out. You can set the working range through the following approaches:
Specify the starting frame and ending frame from the navigation bar on the Graph Pane.
3) After zooming into the desired frame range, click Edit > Trim Current Range to trim out the unnecessary frames.
4) A dialog box will pop up asking to confirm the data removal. If you wish to reset the frame numbers upon trimming the take, select the corresponding check box on the pop-up dialog.
The first step in post-processing is to check for labeling errors. Labels can be lost or assigned to irrelevant markers, either momentarily or entirely, during capture. Labeling errors are especially likely when the marker placement is not optimized or when there are extraneous reflections. As mentioned on other pages, marker labels are vital when tracking a set of markers, because each label affects how the overall set is represented. Examine the recorded capture and spot the labeling errors from the perspective view, or by checking the trajectories of suspicious markers on the Graph pane. Use the Labels pane or the Tracks View mode of the Graph pane to monitor unlabeled markers in the Take.
When a marker is unlabeled momentarily, the color of the tracked marker switches between white (labeled) and orange (unlabeled) under the default color settings. Mislabeled markers may have large gaps and result in a crooked model and trajectory spikes. First, explore the captured frames and find where the label has been misplaced. As long as the target markers are visible, this error can easily be fixed by reassigning the correct labels. Note that this method is preferred over the editing tools because it conserves the actual data and avoids approximation.
Read more about labeling markers from the Labeling page.
The Edit Tools provide functionality to modify and clean up 3D trajectory data after a capture has been taken. Multiple post-processing methods are featured in the Edit Tools for different purposes: Trim Tails, Fill Gaps, Smooth, and Swap Fix. The Trim Tails method removes data points in a few frames before and after a gap. The Fill Gaps method calculates a missing marker trajectory using interpolation methods. The Smooth method filters out unwanted noise in the trajectory signal. Finally, the Swap Fix method switches marker labels for two selected markers. Remember that modifying data using the Edit Tools changes the raw trajectories, so overuse of the Edit Tools is not recommended. Read through each method and familiarize yourself with the editing tools. Note that you can undo and redo all changes made using the Edit Tools.
Frame Range: If you have a certain frame range selected from the timeline, data edits will be applied to the selected range only.
The Trim Tails method trims, or removes, a few data points before and after a gap. Whenever there is a gap in a marker trajectory, slight tracking distortions may be present at each end. For this reason, it is usually beneficial to trim off a small segment (~3 frames) of data. If these distortions are ignored, they may also interfere with other editing tools that rely on existing data points. Before trimming trajectory tails, check all gaps to see if the tracking data is distorted; after all, it is better to preserve the raw tracking data as long as it is reliable. Set the appropriate trim settings, and trim the trajectories on selected frames or on all frames. Each gap must satisfy the gap size threshold value to be considered for trimming. Each trajectory segment also needs to satisfy the minimum segment size; otherwise, it will be considered a gap. Finally, the Trim Size value determines how many leading and trailing trajectory frames are removed around a gap.
Smart Trim
The Smart Trim feature automatically sets the trimming size based on trajectory spikes near the existing gap. Usually only a few data points need to be deleted around a gap, but in some cases it is useful to delete more on one end than the other. This feature determines whether each end of the gap is likely to contain errors, and it deletes an appropriate number of frames accordingly. The Smart Trim feature will not trim more frames than the defined Leading and Trailing values.
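To make the basic behavior concrete, here is a simplified stand-in for the trim operation on a single trajectory; the function and default values below are illustrative, not Motive's implementation:

```python
# Sketch: remove a few samples on each side of every gap (NaN run) in a
# 1D trajectory, mimicking the Trim Size and gap size threshold settings.
import numpy as np

def trim_tails(traj, trim=3, min_gap=2):
    traj = traj.astype(float)
    isnan = np.isnan(traj)
    # Gap boundaries: +1 where a NaN run starts, -1 where it ends.
    d = np.diff(np.concatenate(([0], isnan.view(np.int8), [0])))
    starts, ends = np.where(d == 1)[0], np.where(d == -1)[0]
    for s, e in zip(starts, ends):
        if e - s >= min_gap:                     # only trim qualifying gaps
            traj[max(0, s - trim):s] = np.nan    # frames leading into the gap
            traj[e:e + trim] = np.nan            # frames trailing the gap
    return traj

print(trim_tails(np.array([1, 2, 3, np.nan, np.nan, 6, 7, 8]), trim=1))
```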
Gap filling is the primary method in the data editing pipeline; this feature remodels trajectory gaps with interpolated marker positions, compensating for occluded markers in the capture. The function runs mathematical modeling to interpolate the occluded marker positions from either the existing trajectories or other markers in the asset. Note that interpolating a large gap is not recommended, because approximating too many data points may lead to data inaccuracy.
New to Motive 3.0: for Skeletons and Rigid Bodies only, Model Asset Markers can be used to fill individual frames where a marker has been occluded. Model Asset Markers must first be enabled in the Properties pane while the desired asset is selected, and then they must be enabled for selection in the viewport. When you encounter frames where the marker is lost from camera view, select the associated Model Asset Marker in the 3D view, right-click for the context menu, and select 'Set Key'.
First of all, set the Max. Gap Size value to define the maximum frame length for an occlusion to be considered a gap. If a gap has a longer frame length, it will not be affected by the filling mechanism. Set a reasonable maximum gap size for the capture after looking through the occluded trajectories. To quickly navigate through the trajectory graphs on the Graph pane for missing data, use the Find Gap features (Find Previous and Find Next) to automatically select a gap frame region so the data can be interpolated. Then, apply the Fill Gaps feature while the gap region is selected. Various interpolation options are available in the settings, including Constant, Linear, Cubic, Pattern-based, and Model-based.
There are five interpolation options offered in the Edit Tools: constant, linear, cubic, pattern-based, and model-based. The first three (constant, linear, and cubic) look at a single marker trajectory and attempt to estimate the marker position using the data points before and after the gap; in other words, they model the gap by applying different degrees of polynomial interpolation. The other two options (pattern-based and model-based) reference visible markers and models to estimate the occluded marker position. A small sketch of the single-trajectory case follows the option descriptions below.
Constant
Applies a zero-degree approximation, assuming that the marker position is stationary and remains the same until the next corresponding label is found.
Linear
Applies a first-degree approximation, assuming that the motion is linear, to fill the missing data. Only use this when you are sure that the marker is moving in a linear fashion.
Cubic
Applies third-degree polynomial interpolation, cubic spline, to fill the missing data in the trajectory.
Pattern based
This option refers to the trajectories of selected reference markers and assumes the target marker moves in a similar pattern. The Fill Target marker is specified from the drop-down menu under the Fill Gaps tool. When multiple markers are selected, a Rigid Body relationship is established among them, and that relationship is used to fill the trajectory gaps of the selected Fill Target marker as if the markers were all attached to the same Rigid Body. The following list is the general workflow for using pattern-based interpolation:
Select both reference markers and the target marker to fill.
Examine the trajectory of the target marker from the Graph Pane: Size, range, and a number of gaps.
Set an appropriate Max. Gap Size limit.
Select the Pattern Based interpolation option.
Specify the Fill Target marker in the drop-down menu.
When interpolating only a specific section of the capture, select the range of frames from the Graph pane.
Click Fill Selected, Fill All, or Fill Everything.
Model based
This interpolation is used for filling marker gaps within an asset (skeleton segments or rigid bodies). Model-based interpolation refers to the model and its expected marker positions to estimate the trajectory. When using this option on a skeleton asset, the other skeleton markers and related segments determine a reliable location for the marker during the occluded gap. To use it, simply select a gapped marker within a model, configure the Max. Gap Size value, and apply the interpolation over the desired frame range.
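As a reference point for the single-trajectory options, the sketch below approximates cubic gap filling outside of Motive, with an assumed Max Gap Size parameter; it mirrors the idea described above, not Motive's actual solver:

```python
# Sketch: fit a cubic spline through valid samples and evaluate it inside
# gaps, skipping any gap longer than max_gap frames.
import numpy as np
from scipy.interpolate import CubicSpline

def _gaps(valid):
    d = np.diff(np.concatenate(([1], valid.view(np.int8), [1])))
    return zip(np.where(d == -1)[0], np.where(d == 1)[0])

def fill_gaps_cubic(traj, max_gap=10):
    traj = traj.astype(float)
    valid = ~np.isnan(traj)
    t = np.arange(len(traj))
    spline = CubicSpline(t[valid], traj[valid])
    for s, e in _gaps(valid):
        if e - s <= max_gap:                     # respect the gap size limit
            traj[s:e] = spline(t[s:e])
    return traj

# The gap sits on a parabola, so the cubic fill recovers ~9 and ~16.
print(fill_gaps_cubic(np.array([0.0, 1, 4, np.nan, np.nan, 25, 36, 49])))
```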
The smoothing feature applies a noise filter (low-pass Butterworth, 4th order) to trajectory data, making the marker trajectories smoother. This is a bi-directional filter that does not introduce phase shifts. Using this tool, vibrating or fluttering movements can be filtered out. First, set the cutoff frequency for the filter to define how strongly your data will be smoothed.
When the cutoff frequency is set high, only high-frequency signals are filtered. When the cutoff frequency is low, trajectory signals in a lower frequency range will also be filtered. In other words, a low cutoff frequency setting will smooth most of the transitioning trajectories, whereas a high cutoff frequency setting will smooth only the fluttering trajectories.
High-frequency data is present during sharp transitions, but it can also be introduced by signal noise. Commonly used Filter Cutoff Frequency values range between 7 Hz and 12 Hz, but you may want to set the value higher for fast and sharp motions to avoid softening motion transitions that need to stay sharp.
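The same class of filter is available in SciPy for checking results outside of Motive: a 4th-order low-pass Butterworth applied forward and backward (filtfilt), which is what makes it bi-directional with no phase shift. The 120 Hz capture rate and 10 Hz cutoff below are assumptions for the example:

```python
# Sketch: zero-phase Butterworth smoothing of a noisy 1D trajectory.
import numpy as np
from scipy.signal import butter, filtfilt

fs, cutoff = 120.0, 10.0                  # capture rate and cutoff (Hz)
b, a = butter(4, cutoff / (fs / 2))       # 4th order, normalized to Nyquist
noisy = np.sin(np.linspace(0, 4 * np.pi, 480)) + 0.05 * np.random.randn(480)
smooth = filtfilt(b, a, noisy)            # forward-backward: no phase shift
```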
In some cases, marker labels may get swapped during capture. Swapped labels can result in erratic orientation changes or crooked skeletons, but they can be corrected by re-labeling the markers. The Swap Fix feature in the Edit Tools can be used to correct obvious swaps that persist through the capture. Select the two markers that have their labels swapped, and select the frame range that you wish to edit.
The Find Previous and Find Next buttons allow you to navigate to the frames where the markers' positions have been swapped. If a frame range is not specified, the change will be applied from the current frame forward. Finally, switch the marker labels by clicking the Apply Swap button. As long as both labels are present in the frame and the only correction needed is to exchange the labels, the Swap Fix tool can be used to make the correction.
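In terms of the underlying data, a swap fix amounts to exchanging two trajectory columns over a frame range, as in this hedged NumPy sketch (not Motive's code):

```python
# Sketch: swap two marker trajectories over frames [start, end) in an
# array shaped (n_frames, n_markers, 3).
import numpy as np

def apply_swap(data, col_a, col_b, start, end):
    data = data.copy()                     # leave the original untouched
    data[start:end, [col_a, col_b]] = data[start:end, [col_b, col_a]]
    return data
```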
CS-200:
Long arm: Positive z
Short arm: Positive x
Vertical offset: 19 mm
Marker size: 14 mm (diameter)
CS-400: Used for common mocap applications. Contains knobs for adjusting the balance as well as slots for aligning with a force plate.
Long arm: Positive z
Short arm: Positive x
Vertical offset: 45 mm
Marker size: 19 mm (diameter)
Legacy L-frame square: Legacy calibration square designed before changing to the Right-hand coordinate system.
Long arm: Positive z
Short arm: Negative x
Motive can export tracking data in BioVision Hierarchy (BVH) file format. Exported BVH files do not include individual marker data. Instead, a selected skeleton is exported using hierarchical segment relationships. In a BVH file, the 3D location of a primary skeleton segment (Hips) is exported, and data on subsequent segments are recorded by using joint angles and segment parameters. Only one skeleton is exported for each BVH file, and it contains the fundamental skeleton definition that is required for characterizing the skeleton in other pipelines.
Notes on relative joint angles generated in Motive: Joint angles generated and exported from Motive are intended for basic visualization purposes only and should not be used for any type of biomechanical or clinical analysis.
General Export Options
Option | Description |
---|---|
BVH Specific Export Options
Option | Description |
---|---|
Tracking data can be exported into the C3D file format. C3D (Coordinate 3D) is a binary file format widely used in biomechanics and motion study applications. Data recorded from external devices, such as force plates and NI-DAQ devices, will be included within exported C3D files. Note that common biomechanics applications use a Z-up right-handed coordinate system, whereas Motive uses a Y-up right-handed coordinate system. More details on coordinate systems are described in a later section. Find out more about C3D files at https://www.c3d.org.
General Export Options
Option | Description |
---|---|
C3D Specific Export Options
Options | Descriptions |
---|---|
Common Conventions
Since Motive uses a different coordinate system than the system used in common biomechanics applications, it is necessary to modify the coordinate axis to a compatible convention in the C3D exporter settings. For biomechanics applications using z-up right-handed convention (e.g. Visual3D), the following changes must be made under the custom axis.
X axis in Motive should be configured to positive X
Y axis in Motive should be configured to negative Z
Z axis in Motive should be configured to positive Y.
This will convert the coordinate axes of the exported data so that the x-axis represents the anteroposterior axis (front/back), the y-axis represents the mediolateral axis (left/right), and the z-axis represents the longitudinal axis (up/down).
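Read as a point transform, this mapping is exported (x, y, z) = (Motive x, -Motive z, Motive y). Below is a small sketch for sanity-checking exported data outside Motive; it is one reading of the settings above, not an official formula:

```python
# Sketch: convert points from Motive's Y-up frame to a Z-up right-handed frame.
import numpy as np

def motive_yup_to_zup(points):
    """points: (n, 3) array in Motive's Y-up frame."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    return np.column_stack((x, -z, y))

print(motive_yup_to_zup(np.array([[0.0, 1.0, 0.0]])))  # up stays up: [0, 0, 1]
```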
MotionBuilder Compatible Axis Convention
This is a preset convention for exporting C3D files for use in Autodesk MotionBuilder. Even though Motive and MotionBuilder both use the same coordinate system, MotionBuilder assumes biomechanics standards when importing C3D files. Accordingly, when exporting C3D files for MotionBuilder use, set the Axis setting to MotionBuilder Compatible, and the axes will be exported using the following convention:
Motive: X axis → Set to negative X → Mobu: X axis
Motive: Y axis → Set to positive Z → Mobu: Y axis
Motive: Z axis → Set to positive Y → Mobu: Z axis
There is a known issue where C3D data imported with timecode does not show up accurately in MotionBuilder. This happens because MotionBuilder sets the subframe counts in the timecode using its own playback rate instead of the rate of the timecode. When this happens, set the playback rate in MotionBuilder to match the rate of the timecode generator (e.g. 30 Hz) to get correct timecode. This happens only with C3D import in MotionBuilder; FBX import works fine without changing the playback rate.
Captured tracking data can be exported in Comma Separated Values (CSV) format. This file format uses comma delimiters to separate multiple values in each row, and it can be imported by spreadsheet software or a programming script. Depending on which data export options are enabled, exported CSV files can contain marker data, Rigid Body data, and/or Skeleton data. CSV export options are listed in the following charts:
CSV Options | Description |
---|---|
The quality stats display the reliability of the associated marker data. The errors-per-marker value lists the average displacement between detected markers and the expected marker locations within the corresponding assets. Marker Quality values rate how well camera rays converged when the respective marker was reconstructed; the value varies from 0 (unstable marker) to 1 (accurate marker).
When the header is disabled, this information will be excluded from the CSV files. Instead, the file will have frame IDs in the first column, time data in the second column, and the corresponding mocap data in the remaining columns.
CSV Headers
TIP: Occlusion in the marker data
When there is an occlusion in a marker, the CSV file will contain blank cells. This can interfere when running a script to process the CSV data. It is recommended to optimize the system setup to reduce occlusions. To omit unnecessary frame ranges with frequent marker occlusions, select the frame range with the most complete tracking results. Another solution to this is to use Fill Gaps to interpolate missing trajectories in post-processing.
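For scripted workflows, the blanks can also be handled explicitly when loading the file. The sketch below uses pandas; the number of header rows to skip depends on your export settings, so the skiprows value is an assumption to verify against your own file:

```python
# Sketch: load a Motive CSV, turn occlusion blanks into NaN, and cap the
# interpolation length in the spirit of a maximum gap size.
import pandas as pd

df = pd.read_csv("take.csv", skiprows=6)          # assumed header row count
df = df.apply(pd.to_numeric, errors="coerce")     # blanks -> NaN
print(df.isna().sum().sort_values(ascending=False).head())  # worst columns
filled = df.interpolate(limit=10)                 # cap the fill length
```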
For Takes containing force plates (AMTI or Bertec) or data acquisition (NI-DAQ) devices, additional CSV files will be exported for each connected device. For example, if you have two force plates and a NI-DAQ device in the setup, a total of four CSV files will be saved when you export the tracking data from Motive. Each of the exported CSV files will contain basic properties and settings in its header, including device information and sample counts. The mocap-frame-rate-to-device-sampling-rate ratio is also included, since force plate and analog data are sampled at higher rates.
Please note that device data is usually sampled at a higher rate than the camera system. In this case, each camera sample is aligned with the center of the device samples collected during that camera frame. For example, if the device records 9 sub-frames for each camera frame, the tracking data aligns with every 5th device sample.
Force Plate Data: Each of the force plate CSV files will contain basic properties such as platform dimensions and mechanical-to-electrical center offset values. The mocap frame number, force plate sample number, forces (Fx/Fy/Fz), moments (Mx, My, Mz), and location of the center of pressure (Cx, Cy, Cz) will be listed below the header.
Analog Data: Each of the analog data CSV files contains analog voltages from each configured channel.
With a Motive Body license, tracking data can be exported into FBX files for use in other 3D pipelines. There are two types of FBX files: binary FBX and ASCII FBX.
Notes for MotionBuilder Users
When exporting tracking data to MotionBuilder in the FBX file format, make sure the exported frame rate is supported in MotionBuilder (Mobu). Mobu supports only a select set of playback frame rates, and the rate of the exported FBX file must match one of them in order to play back the data properly.
If a non-standard frame rate that is not supported is selected, the closest supported frame rate is applied.
For more information, please visit Autodesk Motionbuilder's Documentation Support site.
Autodesk has discontinued support for FBX ASCII import in MotionBuilder 2018 and above. For alternatives when working in MotionBuilder, please see the Autodesk MotionBuilder: OptiTrack Optical Plugin page.
Exported FBX files in ASCII format can contain reconstructed marker coordinate data as well as 6 Degrees of Freedom data for each involved asset, depending on the export settings. ASCII files can also be opened and edited using text editor applications.
FBX ASCII Export Options
Binary FBX files are more compact than ASCII FBX files. Reconstructed 3D marker data is not included within this file type, but selected Skeletons are exported by saving corresponding joint angles and segment lengths. For Rigid Bodies, positions and orientations at the defined Rigid Body origin are exported.
FBX Binary Export Options
Various types of files, including the tracking data, can be exported out from Motive. This page provides information on what file formats can be exported from Motive and instructions on how to export them.
Once captures have been recorded into Take files and the corresponding 3D data have been reconstructed, tracking data can be exported from Motive in various file formats.
Exporting Rigid Body Tracking Data
If the recorded Take includes Rigid Body trackable assets, make sure all of the Rigid Bodies are Solved prior to exporting. The solved data will contain positions and orientations of each Rigid Body.
In the export dialog window, the frame rate, the measurement scale, and the frame range of the exported data can be configured. Additional export settings are available for each export file format. Read through the pages below for details on the export options for each file format:
Exporting a Single Take
Step 1. Open the Data pane and select a Take to export. The selected Take must contain reconstructed 3D data.
Step 2. Under the File tab on the command bar, click File → Export Tracking Data. This can also be done by right-clicking on a selected Take from the Data pane and clicking Export Tracking Data from the context menu.
Step 3. On the export dialogue window, select a file format and configure the corresponding export settings.
To export the entire frame range, set Start Frame and End Frame to Take First Frame and Take Last Frame.
To export a specific frame range, set Start Frame and End Frame to Start of Working Range and End of Working Range.
Step 4. Click Save.
Working Range:
The working range (also called the playback range) is both the view range and the playback range of a corresponding Take in Edit mode. Only within the working frame range will recorded tracking data be played back and shown on the graphs. This range can also be used to output specific frame ranges when exporting tracking data from Motive.
The working range can be set from the following places:
In the navigation bar of the Graph View pane, you can drag the handles on the scrubber to set the working range.
You can also use the navigation controls on the Graph View pane to zoom in or zoom out on the frame ranges to set the working range. See: Graph View pane page.
Start and end frames of a working range can also be set from the Control Deck when in the Edit mode.
Exporting Multiple Takes
Step 1. Under the Data pane, shift + select all the Takes that you wish to export.
Step 2. Right-click on the selected Takes and click Export Tracking Data from the context menu.
Step 3. An export dialogue window will show up for batch exporting tracking data.
Step 4. Select the desired output format and configure the corresponding export settings.
Step 5. Select the frame ranges to export under the Start Frame and End Frame settings. You can export either the entire frame range or a specified frame range on all of the Takes. When exporting specific ranges, the desired working range must be set for each respective Take.
To export entire frame ranges, set Start Frame and End Frame to Take First Frame and Take Last Frame.
To export specific frame ranges, set Start Frame and End Frame to Start of Working Range and End of Working Range.
Step 6. Click Save.
Motive Batch Processor:
Exporting multiple Take files with specific options can also be done through a Motive Batch Processor script. For example, refer to FBXExporterScript.cs script found in the MotiveBatchProcessor folder.
Motive exports reconstructed 3D tracking data in various file formats and exported files can be imported into other pipelines to further utilize capture data. Available export formats include CSV, C3D, FBX, BVH, and TRC. Depending on which options are enabled, exported data may include reconstructed marker data, 6 Degrees of Freedom (6 DoF) Rigid Body data, or Skeleton data. The following chart shows what data types are available in different export formats:
CSV and C3D exports are supported in both Motive Tracker and Motive Body licenses. FBX, BVH, and TRC exports are only supported in Motive Body.
A calibration definition of a selected Take can be exported via Export Camera Calibration under the File tab. Exported calibration (CAL) files contain camera positions and orientations in 3D space, and they can be imported into different sessions to quickly load the calibration, as long as the camera setup is maintained.
Read more about calibration files under the Calibration page.
When an asset definition is exported to a MOTIVE user profile, it stores the marker arrangements calibrated in each asset, and they can be imported into different Takes without creating a new asset in Motive. Note that these files specifically store the spatial relationship of each marker; therefore, only identical marker arrangements will be recognized and defined with the imported asset.
To export the assets, go to the File menu and select Export Assets to export all of the assets in Live mode or in the current Take file(s). You can also use File → Export Profile to export other software settings along with the assets.
Recorded NI-DAQ analog channel data can be exported into C3D and CSV files along with the mocap tracking data. Follow the tracking data export steps outlined above and any analog data that exists in the TAK will also be exported.
C3D Export: Both the mocap data and the analog data will be exported into the same C3D file. Please note that all of the analog data within the exported C3D files will be logged at the same sampling frequency. If any of the devices were captured at different rates, Motive will automatically resample all of the analog devices to match the sampling rate of the fastest device. More on C3D files: https://www.c3d.org/
CSV Export: When exporting tracking data into CSV, additional CSV files will be exported for each of the NI-DAQ devices in a Take. Each of the exported CSV files will contain basic properties and settings in its header, including device information and sample counts, and the voltage amplitude of each analog channel will be listed. The mocap-frame-rate-to-device-sampling-rate ratio is also included, since analog data is usually sampled at a higher rate.
Note
The coordinate system used in Motive (y-up right-handed) may be different from the convention used in the biomechanics analysis software.
Common Conventions
Since Motive uses a different coordinate system than the system used in common biomechanics applications, it is necessary to modify the coordinate axis to a compatible convention in the C3D exporter settings. For biomechanics applications using z-up right-handed convention (e.g. Visual3D), the following changes must be made under the custom axis.
X axis in Motive should be configured to positive X
Y axis in Motive should be configured to negative Z
Z axis in Motive should be configured to positive Y.
This will convert the coordinate axes of the exported data so that the x-axis represents the anteroposterior axis (front/back), the y-axis represents the mediolateral axis (left/right), and the z-axis represents the longitudinal axis (up/down).
When there is an MJPEG reference camera in a Take, its recorded video can be exported into an AVI file or into a sequence of JPEG files. The Export Video option is located under the File tab; you can also right-click on a Take file in the Data pane and export from there. At the bottom of the export dialog, the frame rate of the exported AVI file can be set to the full frame rate or down-sampled to half, quarter, 1/8, or 1/16 of the frame rate. You can also adjust the playback speed to export a video with slower or faster playback. Captured reference videos can be exported into AVI files using either the H.264 or MJPEG compression format; the H.264 format allows faster export of recorded videos and is recommended. Read more about recording reference videos on the Data Recording page.
Reference Video Type: Only compressed MJPEG reference videos can be recorded and exported from Motive. Export for raw grayscale videos is not supported.
Media Player: The exported videos may not be playable in Windows Media Player; please use a more robust media player (e.g. VLC) to play the exported video files.
When a recorded capture contains audio data, an audio file can be exported through the Export Audio option that appears when right-clicking on a Take from the Data pane.
Skeletal marker labels for Skeleton assets can be exported as XML files (example shown below) from the Data pane. The XML files can be imported again to use the stored marker labels when creating new Skeletons.
For more information on Skeleton XML files, read through the Skeleton Tracking page.
Sample Skeleton Label XML File
This page covers different video modes that are available on the OptiTrack cameras. Depending on the video mode that a camera is configured to, captured frames are processed differently, and only the configured video mode will be recorded and saved in Take files.
Video types, or image-processing modes, available in OptiTrack Cameras
There are different video types, or image-processing modes, that can be used when capturing with OptiTrack cameras. Depending on the camera model, the available modes vary slightly. Each video mode processes captured frames differently at both the camera hardware and software levels. Furthermore, the precision of the capture and the required amount of CPU resources will vary depending on the configured video type.
The video types are categorized as either tracking modes (Object mode and Precision mode) or reference modes (MJPEG and raw grayscale). Only cameras in the tracking modes will contribute to the reconstruction of 3D data.
Motive records frames of only the configured video types. Video types of the cameras cannot be switched for recorded Takes in post-processing of captured data.
(Tracking Mode) Object mode performs on-camera detection of the centroid location, size, and roundness of the markers; the respective 2D object metrics are then sent to the host PC. In general, this mode is recommended for obtaining 3D data. Compared to other processing modes, Object mode has the smallest CPU footprint, so the lowest processing latency can be achieved while maintaining high accuracy. However, be aware that the 2D reflections are truncated into object metrics in this mode. Object mode is beneficial for Prime series and Flex 13 cameras when the lowest latency is necessary or when CPU performance is taxed by the Precision mode (e.g. high camera counts on a less powerful CPU).
Supported Camera Models: Prime/PrimeX series, Flex 13, and S250e camera models.
(Tracking Mode) Precision mode performs on-camera detection of marker reflections and their centroids. These centroid regions of interest are sent to the PC for additional processing and determination of the precise centroid location. This provides high-quality centroid locations, but it is computationally expensive and is recommended only for low-to-moderate camera count systems when Object mode is unavailable.
Supported Camera Models: Flex series, Tracking Bars, S250e, Slim13e, and Prime 13 series camera models.
(Reference Mode) The MJPEG-compressed grayscale mode captures grayscale frames that are compressed on-camera for scalable reference video capabilities. Grayscale images are used only for reference purposes, and processed frames will not contribute to the reconstruction of 3D data. The MJPEG mode can run at full frame rate and be synchronized with the tracking cameras.
Supported Camera Models: All camera models
(Reference Mode) Processes full-resolution, uncompressed grayscale images. The raw grayscale mode is designed to be used only for reference purposes, and processed frames will not contribute to the reconstruction of 3D data. Because of the high bandwidth associated with sending raw grayscale frames, this mode is not fully synchronized with the other tracking cameras, and these cameras will run at a lower frame rate. Also, raw grayscale videos cannot be exported from a recording. Use this video mode only for aiming and for monitoring camera views when diagnosing tracking problems.
Supported Camera Models: All camera models.
From Perspective View
In the perspective view, right-click on a camera from the viewport and set the camera to the desired video mode.
From Cameras View
In the cameras view, right-click on a camera view and change the video type for the selected camera.
Compared to the object data sent by the tracking cameras in the system, grayscale videos are much larger, and recording reference video consumes more network bandwidth. A high amount of data traffic can increase the system latency or cause reductions in the system frame rate. For this reason, we recommend setting no more than one or two cameras to a reference mode. Also, instead of raw grayscale video, compressed MJPEG grayscale video can be recorded to reduce the data traffic. Reference views can be observed from either the Camera Preview pane or the Reference View pane.
Note:
Processing latency can be monitored from the status bar located at the bottom.
Grayscale images are used only for reference purposes, and processed frames will not contribute to reconstruction of 3D data.
Captured tracking data can be exported into a Track Row Column (TRC) file, a format used in various mocap applications. Exported TRC files can also be opened in spreadsheet software (e.g. Excel). These files contain raw output data from capture, including positional data of each labeled and unlabeled marker from a selected Take. Expected marker locations and segment orientation data are not included in the exported files. The header contains basic information such as the file name, frame rate, time, number of frames, and corresponding marker labels. The corresponding XYZ data is listed in the remaining rows of the file.
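For orientation, here is a hedged sketch of the general layout of an exported TRC file; it is tab-delimited, and the header keys and values shown are illustrative rather than an exact Motive export:

```
PathFileType	4	(X/Y/Z)	take001.trc
DataRate	CameraRate	NumFrames	NumMarkers	Units	OrigDataRate	OrigDataStartFrame	OrigNumFrames
120.00	120.00	600	2	mm	120.00	1	600
Frame#	Time	Marker001	Marker002
		X1	Y1	Z1	X2	Y2	Z2
1	0.000	101.5	1200.3	-35.2	130.8	1180.1	-20.7
2	0.008	101.6	1200.1	-35.0	130.9	1180.0	-20.5
```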
This page provides information and instructions on how to utilize the Probe Measurement Kit.
The measurement probe tool utilizes the precise tracking of OptiTrack mocap systems to measure 3D locations within a capture volume. A probe with an attached Rigid Body is included with the purchased measurement kit. By tracking the markers on the Rigid Body, Motive calculates the precise x-y-z location of the probe tip, allowing you to collect 3D samples in real time with sub-millimeter accuracy. For the most precise calculation, a probe calibration process is required. Once the probe is calibrated, it can be used to sample single points or multiple points to compute the distance or angle between sampled 3D coordinates.
Measurement kit includes:
Measurement probe
Calibration block with 4 slots, with approximately 100 mm spacing between each point.
Creating a probe using the Builder pane
Under the Type drop-down menu, select Probe. This will bring up the options for defining a Rigid Body for the measurement probe.
Select the Rigid Body created for the probe in the earlier step.
Place and fit the tip of the probe in one of the slots on the provided calibration block.
Note that there are two steps in the calibration process: refining the Rigid Body definition and calibrating the pivot point. Click the Create button to initiate the probe refinement process.
Slowly move the probe in a circular pattern while keeping the tip fitted in the slot, tracing out an overall cone shape. Gently rotate the probe to collect additional samples.
After the refinement, Motive will automatically proceed to the next step: the pivot point calibration.
Repeat the same movement to collect additional sample data for precisely calculating the location of the pivot point at the probe tip.
When sufficient samples are collected, the pivot point will be positioned at the tip of the probe and the Mean Tip Error will be displayed. If the probe calibration was unsuccessful, repeat the calibration from the step where the probe tip is placed in the calibration block.
Caution
The probe tip MUST remain fitted securely in the slot on the calibration block during the calibration process.
Also, do not press down on the probe, since the deformation from compression could affect the result.
Using the Probe pane for sample collection
Place the probe tip on the point that you wish to collect.
Click Take Sample on the Measurement pane.
Collecting additional samples will provide distances and angles between the collected samples.
You can also use the probe samples to reorient the coordinate axes of the capture volume. The set origin button positions the coordinate space origin at the tip of the probe, and the set orientation option reorients the capture space by referencing three sampled points.
As samples are collected, their coordinate data is automatically written to CSV files in the OptiTrack documents folder, located at C:\Users\[Current User]\Documents\OptiTrack. The 3D positions of all collected measurements, their respective RMSE values, and the distances between each consecutive sample point are saved in this file.
Also, if needed, you can trigger Motive to export the collected sample coordinate data into a designated directory. To do this, simply click the export option on the Probe pane.
We strongly recommend using a separate audio capture application with timecode to capture and synchronize audio data. Audio capture in Motive is for reference only and is not intended to align perfectly with video or motion capture data.
Take scrubbing is not supported for audio recorded within Motive. If you would like the audio to stay closely aligned with the video and motion capture data, you must play the Take from the beginning.
Recorded audio files can be played back from a captured Take or exported into WAV audio files. This page details how to record and play back audio in Motive. Before using an audio input device (microphone) in Motive, first make sure the device is properly connected and configured in Windows.
In Motive, the audio recording and playback settings can be accessed from the Audio Settings panel.
In Motive, open the Audio Settings, and check the box next to Enable Capture.
Select the audio input device that you want to use.
Press the Test button to confirm that the input device is properly working.
Make sure the device format of the recording device matches the device format used by the playback devices (speakers and headsets). This is very important, as the recorded audio will not play back if these formats do not match. Most speakers have at least 2 channels, so an input device with 2 channels should be used for recording.
Capture the Take.
In Motive, open a Take that includes audio recordings.
To playback recorded audio from a Take, check the box next to Enable Playback.
Select the audio output device that you will be using.
Make sure the configuration under Device Format closely matches the Take Format. This is elaborated further in the section below.
Play the Take.
In order to play back audio recordings in Motive, the audio format of the recorded sound MUST closely match the audio format used by the output device. Specifically, the channel count and frequency of the audio must match; otherwise, the recorded sound will not be played back.
The recorded audio format is determined by the format of the recording device used when capturing the Take. However, the audio formats of the input and output devices may not always agree. In this case, you will need to adjust the input device properties to match the Take.
A device's audio format can be configured under the Sound settings in Windows. In the Sound settings (accessed from the Control Panel), select the recording device, click Properties, and change the default format under the Advanced tab.
There are a variety of programs and hardware devices that specialize in audio capture. A non-exhaustive list of examples:
Tentacle Sync TRACK E
Adobe Premiere
Avid Media Composer
Etc...
In order to capture audio using a different program, you will need to connect both the motion capture system (through the eSync) and the audio capture device to a timecode source (and possibly a genlock source). You can then use the timecode information to synchronize the two data sources in your end product.
The following devices are internally tested and should work for most use cases for reference audio only:
AT2020 USB
MixPre-3 II Digital USB Preamp
Hotkeys can be viewed and customized in Motive. The chart below lists only the commonly used hotkeys; there are other assigned and unassigned hotkeys that are not included. For a complete list of hotkey assignments, please check the hotkey settings in Motive.
Function | Default Hotkey |
---|---|
The Data Streaming settings can be found by selecting View > Data Streaming Pane.
Select the network interface address for streaming data.
Select desired data types to stream under streaming options.
When streaming skeletons, set the appropriate bone naming convention for client application.
Check Broadcast Frame Data at the top.
Configure the streaming settings and designate the corresponding IP address in the client application.
Stream live or playback captures
Firewall or anti-virus software can block network traffic, so it is important to make sure these applications are disabled or configured to allow access for both the server (Motive) and client applications.
Before broadcasting data onto the selected network interface, define which data types to stream. Under the streaming options, there are settings to include or exclude specific data types and syntax. Set only the necessary options to true; for most applications, the default settings are appropriate.
When streaming skeleton data, the bone naming convention formats the annotation for each segment in the streamed data. The appropriate convention should be configured so that the client application properly recognizes the segments. For example, when streaming to Autodesk pipelines, the naming convention should be set to FBX.
NatNet is a client/server networking protocol for sending and receiving data across a network in real time. It utilizes UDP along with either unicast or multicast communication for integrating and streaming reconstructed 3D data, Rigid Body data, and Skeleton data from OptiTrack systems to client applications. The API includes a class for communicating with OptiTrack server applications, which can be used to build client protocols. Using the tools provided in the NatNet API, capture data can be used in various application platforms. Please refer to the NatNet User Guide for more information on using NatNet and its API references.
Rotation conventions
NatNet streams rotational data in quaternions. If you wish to present the rotational data in the Euler convention (pitch-yaw-roll), the quaternion data needs to be converted into Euler angles. In the provided NatNet SDK samples, the SampleClient3D application converts quaternion rotations into Euler rotations to display in the application interface. The sample algorithms for the conversion are scripted in the NATUtils.cpp file. Refer to the NATUtils.cpp and SampleClient3D.cpp files to see how to convert quaternions into Euler conventions.
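For illustration, below is a minimal C# sketch of one such conversion, assuming unit quaternions, right-handed axes, and the XYZ rotation order (pitch about X, yaw about Y, roll about Z) described later in this guide; NATUtils.cpp remains the authoritative reference:

```csharp
using System;

// Hedged sketch: converts a unit quaternion (qx, qy, qz, qw) into Euler
// angles in degrees, assuming right-handed axes and the XYZ rotation order
// (pitch about X, yaw about Y, roll about Z).
static class QuaternionConversion
{
    public static void ToEulerXYZ(
        double qx, double qy, double qz, double qw,
        out double pitch, out double yaw, out double roll)
    {
        // Rotation matrix terms needed when R = Rx(pitch) * Ry(yaw) * Rz(roll).
        double m00 = 1.0 - 2.0 * (qy * qy + qz * qz);
        double m01 = 2.0 * (qx * qy - qz * qw);
        double m02 = 2.0 * (qx * qz + qy * qw);
        double m12 = 2.0 * (qy * qz - qx * qw);
        double m22 = 1.0 - 2.0 * (qx * qx + qy * qy);

        double sy = Math.Max(-1.0, Math.Min(1.0, m02)); // clamp for safety

        double rad2deg = 180.0 / Math.PI;
        yaw   = Math.Asin(sy) * rad2deg;          // about Y
        pitch = Math.Atan2(-m12, m22) * rad2deg;  // about X
        roll  = Math.Atan2(-m01, m00) * rad2deg;  // about Z
    }
}
```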
XML Triggering Port: Command Port (Advanced Network Settings) + 2. This defaults to 1512 (1510 + 2). Tip: within the NatNet SDK sample package, there are simple applications (BroadcastSample.cpp (C++) and NatCap (C#)) that demonstrate the use of the XML remote trigger in Motive.
XML syntax for the start / stop trigger packet
Capture Start Packet
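A hedged sketch of a capture start packet follows; the element names mirror the value table later on this page, the attribute syntax is modeled on the BroadcastSample application, and all values are illustrative:

```xml
<CaptureStart>
  <Name VALUE="Take001"/>
  <SessionName VALUE="Session01"/>
  <Notes VALUE=""/>
  <Assets VALUE=""/>
  <DatabasePath VALUE="C:\Users\[Current User]\Documents\OptiTrack\Session01\"/>
  <TimeCode VALUE="00:00:00:00"/>
</CaptureStart>
```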
Capture Stop Packet
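Likewise, a hedged sketch of a capture stop packet, with illustrative values:

```xml
<CaptureStop>
  <Name VALUE="Take001"/>
  <Notes VALUE=""/>
  <Assets VALUE=""/>
  <TimeCode VALUE="00:00:30:00"/>
</CaptureStop>
```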
Highlight, or select, the desired frame range in the Graph pane, and zoom into it using the zoom-to-fit hotkey (F) or the corresponding toolbar icon.
Set the working range from the Control Deck by entering start and end frames in the field.
To switch between video types, simply right-click on one of the cameras in the Devices pane and select the desired image processing mode under the video types.
You can check and/or switch the video type of a selected camera from either the Devices pane or the Properties pane. You can also toggle cameras between tracking mode and reference mode by clicking the Mode button. If you want to use all of the cameras for tracking, make sure all of the cameras are in a tracking mode.
Open the Devices pane and select one or more of the cameras listed. Once the selection is made, the respective camera properties will be shown in the Properties pane. The current video type is shown in the Video Mode section, and you can change it using the drop-down menu.
Cameras can also be set to record grayscale reference videos during capture. When using MJPEG mode, these videos are synchronized with the other captured frames, and they can be used to observe what happened during a recorded capture. To record reference video, switch a camera into the MJPEG grayscale mode by toggling its camera mode.
The Reference View pane can be accessed under the View tab → Reference Overlay, or by clicking one of the reference view icons on the main toolbar. This pane is used specifically for monitoring reference images from either a live capture or a recorded capture. When reference cameras are viewed in this pane, captured assets are overlaid on the video, which is very useful for analyzing events during the capture.
This section provides detailed steps on how to create and use the measurement probe. Please make sure the camera volume has been calibrated successfully before creating the probe. System calibration is critical to the accuracy of marker tracking, and it will directly affect the probe measurements.
Open the Builder pane and click Rigid Bodies.
Bring the probe out into the tracking volume and create a Rigid Body from the markers.
Once the probe is calibrated successfully, a probe asset will be displayed over the Rigid Body in Motive, and live x/y/z position data for the tip will be displayed in the Probe pane.
Under the Tools tab, open the Probe pane.
A virtual reference point is constructed at the sampled location, and the coordinates of the point are displayed in the Probe pane. The point's location can be exported as a .CSV file.
The location of the probe tip can also be streamed to another application in real time by streaming the probe Rigid Body position via NatNet. Once calibrated, the pivot point of the Rigid Body is positioned precisely at the tip of the probe. The location of the pivot point is given by the corresponding Rigid Body x-y-z position, which can be referenced to find where the probe tip is located.
Audio capture within Motive does not natively synchronize to video or motion capture data and is intended for reference audio only. If you require synchronization, please use an external device and software with timecode. See below for suggested software and hardware.
Recorded audio files can be exported into WAV format. To export, right-click on a Take in the Data pane and select the Export Audio option in the context menu.
For more information on synchronizing external devices, read through the Synchronization page.
Motive offers multiple options for streaming tracking data to external applications in real time. Streaming plugins are available for Autodesk MotionBuilder, The MotionMonitor, Visual3D, Unreal Engine 4, 3ds Max, Maya (VCS), VRPN, and trackd, and they can be downloaded from the OptiTrack website. For other streaming options, the NatNet SDK enables users to build custom clients to receive capture data. The listed streaming options do not require separate licenses to use. Common motion capture applications rely on real-time tracking, and the OptiTrack system is designed to deliver data at extremely low latency even when streaming to third-party pipelines. This page covers configuring Motive to broadcast frame data over a selected server network. Detailed instructions for specific plugins are included in the PDF documentation that ships with the respective plugins or SDKs.
Read through the Data Streaming page for explanations of each setting. For further questions, visit the NaturalPoint Data Streaming Forum.
Open the Data Streaming settings in Motive.
It is important to select the correct network adapter (interface, IP address) for streaming data. Most Motive host PCs will have multiple network adapters: one for the camera network and one (or more) for the local area network (LAN). Motive will only stream over the selected adapter (interface). Select the desired interface in the streaming settings in Motive. The interface can be either over a local area network (LAN) or on the same machine (localhost, local loopback). If both the server (Motive) and the client application are running on the same machine, set the network interface to the local loopback address (127.0.0.1). When streaming over a LAN, select the IP address of the network adapter connected to the LAN. This will be the same address the client application uses to connect to Motive.
Motive (1.7+) uses a right-handed, Y-up coordinate system. However, the coordinate systems used in client applications may not always agree with the convention used in Motive. In that case, the coordinate system of the streamed data needs to be modified to a compatible convention. For client applications with a different ground plane definition, the Up Axis can be changed under the Advanced Network Settings. For compatibility with left-handed coordinate systems, the simplest method is to rotate the capture volume 180 degrees on the Y axis when defining the ground plane during calibration.
If desired, recording in Motive can control, or be controlled by, other remote applications by sending or receiving either NatNet commands or XML broadcast messages to or from a client application over the UDP communication protocol. This enables client applications to trigger Motive, or vice versa. Using NatNet commands is recommended, because they are not only more robust but also offer additional control features.
Recording start and stop commands can also be transmitted via XML packets. When triggering via XML messages, the Remote Trigger setting under the advanced network settings must be set to true. In order for Motive, or clients, to receive the packets, the XML messages must be sent via the triggering UDP port. The triggering port is defined as two increments (+2) of the Command Port (default: 1510) under the advanced network settings, so it defaults to 1512. Lastly, the XML messages must exactly follow the syntax shown in the packet examples above.
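For illustration, a minimal C# sketch of broadcasting such a trigger over UDP follows; the port arithmetic mirrors the description above, while the payload is the hypothetical capture start packet sketched earlier (the NatCap sample is the authoritative reference):

```csharp
using System.Net;
using System.Net.Sockets;
using System.Text;

class RemoteTriggerSender
{
    static void Main()
    {
        // Hedged example payload; see the capture start packet sketch above.
        string xml =
            "<CaptureStart>" +
            "<Name VALUE=\"Take001\"/>" +
            "<SessionName VALUE=\"Session01\"/>" +
            "</CaptureStart>";
        byte[] data = Encoding.ASCII.GetBytes(xml);

        // Triggering port = Command Port (default 1510) + 2 = 1512.
        using (var udp = new UdpClient())
        {
            udp.EnableBroadcast = true;
            udp.Send(data, data.Length, new IPEndPoint(IPAddress.Broadcast, 1512));
        }
    }
}
```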
CS-100: Used to define the ground plane in small, precise motion capture volumes.
Long arm: Positive z
Short arm: Positive x
Vertical offset: 11.5 mm
Marker size: 9.5 mm (diameter)
Frame Rate
Number of samples included per every second of exported data.
Start Frame
Start frame of the exported data. You can either set it to the recorded first frame of the exported Take or to the start of the working range, or scope range, as configured under the Control Deck or in the Graph View pane.
End Frame
End frame of the exported data. You can either set it to the recorded end frame of the exported Take or to the end of the working range, or scope range, as configured under the Control Deck or in the Graph View pane.
Scale
Apply scaling to the exported tracking data.
Units
Sets the length units to use for exported data.
Axis Convention
Sets the axis convention of the exported data. This can be set to a custom convention, or to preset conventions for exporting to MotionBuilder or Visual3D/The MotionMonitor.
X Axis Y Axis Z Axis
Allows customization of the axis convention in the exported file by determining which positional data is included in each corresponding data set.
Use Zero Based Frame Index
The C3D specification defines the first frame as index 1. Some applications import C3D files with the first frame starting at index 0. Setting this option to true adds a start frame parameter with value zero in the data header.
Export Unlabeled Markers
Includes unlabeled marker data in the exported C3D file. When set to False, the file will contain data for only labeled markers.
Export Finger Tip Markers
Includes virtual reconstructions at the fingertips. Available only with Skeletons that support finger tracking (e.g. Baseline + 11 Additional Markers + Fingers (54)).
Use Timecode
Includes timecode.
Rename Unlabeled As _000X
Unlabeled markers will have incrementing labels with numbers _000#.
Marker Name Syntax
Choose whether the marker naming syntax uses ":" or "_" as the name separator. The name separator will be used to separate the asset name and the corresponding marker name in the exported data (e.g. AssetName:MarkerLabel or AssetName_MarkerLabel or MarkerLabel).
Frame Rate
Number of samples included per every second of exported data.
Start Frame
Start frame of the exported data. You can either set it to the recorded first frame of the exported Take or to the start of the working range, or scope range, as configured under the Control Deck or in the Graph View pane.
End Frame
End frame of the exported data. You can either set it to the recorded end frame of the exported Take or to the end of the working range, or scope range, as configured under the Control Deck or in the Graph View pane.
Scale
Apply scaling to the exported tracking data.
Markers
Enabling this option includes X/Y/Z reconstructed 3D positions for each marker in exported CSV files.
Unlabeled Markers
Enabling this option includes tracking data of all of the unlabeled markers in the exported CSV file along with the labeled markers. If you just want to view the labeled marker data, you can turn off this export setting.
Quality Statistics
Adds a column of Mean Marker Error values after each rigid body's position data.
Adds a column of marker quality values after each rigid body marker's data.
More details are provided in the section below.
Rigid Bodies
When this option is set to true, the exported CSV file will contain 6 degrees of freedom (6 DoF) data for each rigid body in the Take. 6 DoF data contains orientations (pitch, roll, and yaw in the chosen rotation type) as well as the 3D positions (x, y, z) of the rigid body center.
RigidBodyMarkers
Enabling this option includes 3D position data for each expected marker location (not the actual marker location) of rigid body assets. Compared to the reconstructed marker positions included in the Markers columns, the Rigid Body Markers show the solved marker positions as affected by the rigid body tracking, unaffected by occlusions.
Bones
When this option is set to true, exported CSV files will include 6 DoF data for each bone segment of the skeletons in the exported Take. 6 DoF data contains orientations (pitch, roll, and yaw in the chosen rotation type) and the 3D positions (x, y, z) of the proximal joint center of the bone, which is the pivot point of the bone.
BoneMarkers
Enabling this option includes 3D position data for each expected marker location (not the actual marker location) of bone segments in skeleton assets. Compared to the real marker positions included in the Markers columns, the Bone Markers show the solved marker positions as affected by the skeleton tracking, unaffected by occlusions.
Header information
Includes detailed information about the capture as a header in exported CSV files. The types of information included in the header are listed in the following section.
Rotation Type
The rotation type determines whether quaternions or Euler angles are used as the orientation convention in exported CSV files. For Euler rotations, a right-handed coordinate system is used, and all orders (XYZ, XZY, YXZ, YZX, ZXY, ZYX) of elemental rotation are available. More specifically, the XYZ order indicates pitch is degrees about the X axis, yaw is degrees about the Y axis, and roll is degrees about the Z axis.
Unit
Sets units for positional data in exported CSV files
Export Device Data
When set to True, separate CSV files for recorded device data will be exported. This includes force plate data and analog data from NI-DAQ devices. A CSV file will be exported for each device included in the Take.
Use World Coordinates
This option decides whether the exported data is based on the world (global) or local coordinate system.
Global: Defines position and orientation with respect to the global coordinate system of the calibrated capture volume. The global origin is the origin of the ground plane, which was set with a calibration square during the calibration process.
Local: Defines each bone segment's position and orientation with respect to the coordinate system of its parent segment. Note that the hip of the skeleton is always the top-most parent in the segment hierarchy. Local coordinate axes can be set to visible from the Application Settings or from the display properties of assets in the Data pane. Bone segment rotation values in the local coordinate space can be used to roughly represent joint angles; however, for precise analysis, joint angles should be computed in biomechanical analysis software using the exported capture data (C3D).
Row | Description |
---|---|
1st row | General information about the Take and export settings: format version of the CSV export, name of the TAK file, captured frame rate, export frame rate, capture start time, total number of frames, rotation type, length units, and coordinate space type. |
2nd row | Empty. |
3rd row | The data type listed in each corresponding column: raw marker, Rigid Body, Rigid Body marker, bone, bone marker, or unlabeled marker. Read more about Marker Types. |
4th row | Marker or asset labels for each corresponding data set. |
5th row | Marker ID. |
6th and 7th rows | Header labels indicating which data is in each column: position and orientation on X/Y/Z. |
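Putting these rows together, a hedged sketch of the top of an exported CSV file follows; the key names and values are illustrative, so inspect an actual export for the authoritative header:

```
Format Version,1.23,Take Name,take001,Capture Frame Rate,120.000000,Export Frame Rate,120.000000,Capture Start Time,2023-01-01 10.00.00 AM,Total Exported Frames,600,Rotation Type,Quaternion,Length Units,Meters,Coordinate Space,Global

,,Rigid Body,Rigid Body,Rigid Body,Rigid Body,...
,,RigidBody1,RigidBody1,RigidBody1,RigidBody1,...
,,ID:1,ID:1,ID:1,ID:1,...
,,Rotation,Rotation,Rotation,Rotation,...
Frame,Time (Seconds),X,Y,Z,W,...
```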
Frame Rate
Number of samples included per every second of exported data.
Start Frame
Start frame of the exported data. You can either set it to the recorded first frame of the exported Take or to the start of the working range, or scope range, as configured under the Control Deck or in the Graph View pane.
End Frame
End frame of the exported data. You can either set it to the recorded end frame of the exported Take or to the end of the working range, or scope range, as configured under the Control Deck or in the Graph View pane.
Scale
Apply scaling to the exported tracking data.
Units
Set the unit in exported files.
Use Timecode
Includes timecode.
Export FBX Actors
Includes FBX Actors in the exported file. An Actor is a type of asset used in animation applications (e.g. MotionBuilder) to display imported motions and connect them to a character. In order to animate exported Actors, the associated markers need to be exported as well.
Optical Marker Name Space
Overrides the default name spaces for the optical markers.
Marker Name Separator
Choose ":" or "_" for marker name separator. The name separator will be used to separate the asset name and the corresponding marker name when exporting the data (e.g. AssetName:MarkerLabel or AssetName_MarkerLabel). When exporting to Autodesk Motion Builder, use "_" as the separator.
Markers
Export each marker coordinates.
Unlabeled Markers
Includes unlabeled markers.
Calculated Marker Positions
Export asset's constraint marker positions as the optical marker data.
Interpolated Fingertips
Includes virtual reconstructions at the fingertips. Available only with Skeletons that support finger tracking.
Marker Nulls
Exports locations of each marker.
Export Skeleton Nulls
Can only be exported when solved data is recorded for exported Skeleton assets. Exports 6 Degree of Freedom data for every bone segment in selected Skeletons.
Rigid Body Nulls
Can only be exported when solved data is recorded for exported Rigid Body assets. Exports 6 Degree of Freedom data for selected Rigid Bodies. Orientation axes are displayed on the geometrical center of each Rigid Body.
Frame Rate
Number of samples included per every second of exported data.
Start Frame
Start frame of the exported data. You can either set it to the recorded first frame of the exported Take or to the start of the working range, or scope range, as configured under the Control Deck or in the Graph View pane.
End Frame
End frame of the exported data. You can either set it to the recorded end frame of the exported Take or to the end of the working range, or scope range, as configured under the Control Deck or in the Graph View pane.
Scale
Apply scaling to the exported tracking data.
Units
Sets the unit for exported segment lengths.
Use Timecode
Includes timecode.
Export Skeletons
Exports Skeleton nulls. Please note that solved data must be recorded for Skeleton bone tracking data to be exported. This exports 6 degrees of freedom data for every bone segment in the selected Skeletons.
Skeleton Names
Names of Skeletons that will be exported into the FBX binary file.
Name Separator
Choose ":" or "_" for marker name separator. The name separator will be used to separate the asset name and the corresponding marker name when exporting the data (e.g. AssetName:MarkerLabel or AssetName_MarkerLabel). When exporting to Autodesk Motion Builder, use "_" as the separator.
Rigid Body Nulls
Can only be exported when solved data is recorded for exported Rigid Body assets. Exports 6 Degree of Freedom data for selected Rigid Bodies. Orientation axes are displayed on the geometrical center of each Rigid Body.
Rigid Body Names
Names of the Rigid Bodies to export into the FBX binary file as 6 DoF nulls.
Marker Nulls
Exports locations of each marker.
Reconstructed 3D Marker Data
6 Degrees of Freedom Rigid Body Data
Skeleton Data
File |
Open File (TTP, CAL, TAK, TRA, SKL) | CTRL + O |
Save Current Take | CTRL + S |
Save Current Take As | CTRL + Shift + S |
Export Tracking Data from current (or selected) TAKs | CTRL + Shift + Alt + S |
Basic |
Toggle Between Live/Edit Mode | ~ |
Record Start / Playback start | Space Bar |
Select All | CTRL + A |
Undo | Ctrl + Z |
Redo | Ctrl + Y |
Cut | Ctrl + X |
Paste | Ctrl + V |
Layout |
Calibrate Layout | Ctrl+1 |
Create Layout | Ctrl+2 |
Capture Layout | Ctrl+3 |
Edit Layout | Ctrl+4 |
Custom Layout [1...] | Ctrl+[5...9], Shift+[1...9] |
Perspective View Pane (3D) |
Follow Selected | G |
Zoom to Fit Selection | F |
Zoom to Fit All | Shift + F |
Reset Tracking | Ctrl+R |
" |
Shift + " |
Jog Timeline | Alt + Left Click |
Create Rigid Body From Selected | Ctrl+T |
Refresh Skeleton Asset | Ctrl + R with a skeleton asset selected |
Enable/Disable Asset Editing | T |
Toggle Labeling Mode | D |
Select Mode | Q |
Translation Mode | W |
Rotation Mode | E |
Scale Mode | R |
Camera Preview (2D) |
Video Modes |
Data Management Pane |
Remove or Delete Session Folders | Delete |
Remove Selected Take | Delete |
Paste Shots as Empty Take from Clipboard | Ctrl+V |
Timeline / Graph View |
Toggle Live/Edit Mode | ~ |
Again+ | + |
Live Mode: Record | Space |
Edit Mode: Start/stop playback | Space |
Rewind (Jump to the first frame) | Ctrl + Shift + Left Arrow |
PageTimeBackward (Ten Frames) | Down Arrow |
StepTimeBackward (One Frame) | Left Arrow |
StepTimeForward (One Frame) | Right Arrow |
PageTimeForward (Ten Frames) | Up Arrow |
FastForward (Jump to the last frame) | Ctrl + Shift + Right Arrow |
To next gapped frames | Z |
To previous gapped frames | Shift + Z |
Graph View - Delete Selected Keys in 3D data | Delete when frame range is selected |
Show All | Shift + F |
Frame To Selected | F |
Zoom to Fit All | Shift + F |
Editing / Labeling Workflow |
Apply smoothing to selected trajectory | X |
Apply cubic fit to the gapped trajectory | C |
Toggle Labeling Mode | D |
To next gapped frame | Z |
To previous gapped frame | Shift + Z |
Enable/Disable Asset Editing | T |
Select Mode | Q |
Translation Mode | W |
Rotation Mode | E |
Scale Mode | R |
Delete selected key | DELETE |
Capture start packet values:
Value | Description |
---|---|
Name | Name of the Take that will be recorded. |
SessionName | Name of the session folder. |
Notes | Informational note for describing the recorded Take. |
Description | (Reserved) |
Assets |
DatabasePath | The file directory where the recorded captures will be saved. |
Start Timecode |
PacketID | (Reserved) |
HostName | (Reserved) |
ProcessID | (Reserved) |
Capture stop packet values:
Value | Description |
---|---|
Name | Name of the recorded Take. |
Notes | Informational notes describing the recorded Take. |
Assets |
Timecode |
HostName | (Reserved) |
ProcessID | (Reserved) |
Protocol | Markers | Rigid Bodies | Skeletons | Description |
---|---|---|---|---|
NatNet SDK | Y | Y | Y | Runs local or over network. The NatNet SDK includes multiple sample applications for C/C++, OpenGL, Winforms/.NET/C#, MATLAB, and Unity. It also includes a C/C++ sample showing how to decode Motive UDP packets directly without the use of client libraries (for cross-platform clients such as Linux). C/C++, VB/C#/.NET, or MATLAB. |
Autodesk MotionBuilder Plugin | Y | Y | Y | Runs local or over network. Allows streaming both recorded data and real-time capture data for markers, rigid bodies, and skeletons. Comes with MotionBuilder resources: OptiTrack Optical Device, OptiTrack Skeleton Device, OptiTrack Insight VCS. |
Visual3D | Y | N | N | With a Visual3D license, you can download the Visual3D server application, which is used to connect the OptiTrack server to the Visual3D application. Using the plugin, Visual3D receives streamed marker data to solve precise skeleton models for biomechanics applications. |
The MotionMonitor | Y | N | N | The MotionMonitor is capable of receiving live streamed motion capture data from Motive. Streamed data is then solved, in real time, using live marker data. |
Unreal Engine 4 Plugin | N | Y | N | |
Unity Plugin | N | Y | N | |
3ds Max Plugin | N | Y | N | (Unmaintained) Runs local or over network. Supports 3ds Max 2009-2012. This plugin allows Autodesk 3ds Max to receive skeletons and rigid bodies from an OptiTrack server application such as Motive. |
VCS:Maya | N | Y | N | Separate license required. Streams capture data into Autodesk Maya for use with the Virtual Camera System. |
Frame Rate
Number of samples included per every second of exported data.
Start Frame
Start frame of the exported data. You can either set it to the recorded first frame of the exported Take or to the start of the working range, or scope range, as configured under the Control Deck or in the Graph View pane.
End Frame
End frame of the exported data. You can either set it to the recorded end frame of the exported Take or to the end of the working range, or scope range, as configured under the Control Deck or in the Graph View pane.
Scale
Apply scaling to the exported tracking data.
Units
Sets the length units to use for exported data.
Axis Convention
Sets the axis convention of the exported data. This can be set to a custom convention, or to preset conventions for exporting to MotionBuilder or Visual3D/The MotionMonitor.
X Axis Y Axis Z Axis
Allows customization of the axis convention in the exported file by determining which positional data is included in each corresponding data set.
Single Joint Torso
When this is set to true, there will be only one skeleton segment for the torso. When set to false, there will be extra joints on the torso, above the hip segment.
Hands Downward
Sets the exported skeleton base pose to use hands facing downward.
MotionBuilder Names
Sets the name of each skeletal segment according to the bone naming convention used in MotionBuilder.
Skeleton Names
Set this to the name of the skeleton to be exported.
In Motive, the Application Settings can be accessed under the View tab or by clicking the settings icon on the main toolbar.
Default Application Settings can be recovered by clicking Reset Application Settings under the Edit Tools tab on the main toolbar.
The Application Settings consist of several tabs, each of which is covered in further detail on its respective page. Below is the list of the Application Settings tabs in Motive 2.3.
Cameras
The OptiTrack Duo/Trio tracking bars are factory calibrated, and there is no need to calibrate the cameras to use the system. By default, the tracking volume origin is set at the center of the camera bar, and the axes are oriented so that the Z axis points forward, the Y axis up, and the X axis left.
If you wish to change the location and orientation of the global axis, you can use the Coordinate Systems Tool which can be found under the Tools tab.
When using the Duo/Trio tracking bars, you can set the coordinate origin at a desired location and orientation using a calibration square. Make sure the calibration square is oriented properly.
Adjusting the Coordinate System Steps
First, place the calibration square at the desired origin.
[Motive] Open the Coordinate System Tools pane under the Tools tab.
[Motive] Select the calibration square markers from the Perspective View pane.
[Motive] Click the Set Ground Plane button in the Coordinate System Tools pane, and the global origin will be adjusted.
The API reports "world-space" values for markers and rigid body objects at each frame. It is often desirable to convert the coordinates of points reported by the API from world-space (global) coordinates into the local space of a rigid body. This is useful, for example, if you have a rigid body that defines a reference space within which you want to track markers.
Rotation values are reported both as quaternions and as roll, pitch, and yaw angles (in degrees). Quaternions are a four-dimensional rotation representation that provides greater mathematical robustness by avoiding the "gimbal" points that may be encountered when using roll, pitch, and yaw (also known as Euler angles). However, quaternions are also more mathematically complex and more difficult to visualize, which is why many still prefer Euler angles.
There are many potential combinations of Euler angles, so it is important to understand the order in which rotations are applied, the handedness of the coordinate system, and the axis (positive or negative) about which each rotation is applied.
These are the conventions used in the API for Euler angles:
Rotation order: XYZ
All coordinates are *right-handed*
Pitch is degrees about the X axis
Yaw is degrees about the Y axis
Roll is degrees about the Z axis
Position values are in millimeters
To create a transform matrix that converts from world coordinates into the local coordinate system of your chosen rigid body, you will first want to compose the local-to-world transform matrix of the rigid body, then invert it to create a world-to-local transform matrix.
To compose the rigid body local-to-world transform matrix from values reported by the API, first compose a rotation matrix from the quaternion rotation value (or from the yaw, pitch, and roll angles), then inject the rigid body translation values. Transform matrices can be defined as either "column-major" or "row-major"; in a column-major transform matrix, the translation values appear in the right-most column of the 4x4 matrix. For the purposes of this article, column-major transform matrices are used. It is beyond the scope of this article, but it is just as feasible to use row-major matrices by transposing them.
In general, a world transform matrix M has the form:

M = [ R00 R01 R02 Tx ]
    [ R10 R11 R12 Ty ]
    [ R20 R21 R22 Tz ]
    [ 0   0   0   1  ]
where Tx, Ty, and Tz are the world-space position of the origin (of the rigid body, as reported by the API), and R is the 3x3 rotation matrix composed as: R = Rx(Pitch) * Ry(Yaw) * Rz(Roll)
where Rx, Ry, and Rz are 3x3 rotation matrices composed according to:
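For the right-handed convention above, these are the standard elemental rotation matrices (writing c and s for the cosine and sine of each angle):

Rx(pitch) = [ 1  0   0 ]   Ry(yaw) = [  c  0  s ]   Rz(roll) = [ c  -s  0 ]
            [ 0  c  -s ]             [  0  1  0 ]              [ s   c  0 ]
            [ 0  s   c ]             [ -s  0  c ]              [ 0   0  1 ]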
A handy trick to know about local-to-world transform matrices is that, once composed, the matrix can be validated by examining each column. The first three rows of column 1 are the (normalized) XYZ direction vector of the world-space X axis, column 2 holds the Y axis, and column 3 the Z axis. Column 4, as noted previously, is the location of the world-space origin.
To convert a point from world coordinates (coordinates reported by the API for a 3D point anywhere in space), you need a matrix that converts from world space to local space. We have a local-to-world matrix (where the local coordinates are defined as the coordinate system of the rigid body used to compose the transform matrix), so inverting that matrix yields a world-to-local transform matrix. Inverting a general 4x4 matrix can be slightly complex and may encounter singularities; however, we are dealing with a special transform matrix that contains only a rotation and a translation. Because of that, we can take advantage of the method shown here to invert the matrix easily:
http://stackoverflow.com/questions/2624422/efficient-4x4-matrix-inverse-affine-transform
Once the world matrix is inverted, multiplying it by the coordinates of a world-space point yields the point in the local space of the rigid body. Any number of points can be multiplied by this inverted matrix to transform them from world (API) coordinates to local (rigid body) coordinates.
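For illustration, a minimal C# sketch of this world-to-local conversion follows, using the affine shortcut (the inverse rotation is the transpose of R, and the inverse translation is -R^T * T); the shipped sample is C++, and the names here are illustrative:

```csharp
using System;

// Hedged sketch: transforms a world-space point into a rigid body's local
// space. R is the 3x3 rotation of the rigid body and T its world-space
// translation, as reported per frame.
static class RigidBodySpace
{
    public static double[] WorldToLocal(double[,] R, double[] T, double[] p)
    {
        var local = new double[3];
        for (int i = 0; i < 3; i++)
        {
            // Row i of R^T is column i of R, applied to (p - T).
            local[i] = R[0, i] * (p[0] - T[0])
                     + R[1, i] * (p[1] - T[1])
                     + R[2, i] * (p[2] - T[2]);
        }
        return local;
    }
}
```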
The API includes a sample (markers.sln/markers.cpp) that demonstrates this exact usage.
This page explains some of the settings that affect how 3D tracking data is obtained. Most of the related settings can be found under the Live Pipeline tab in the Application Settings. A basic understanding of this process will allow you to fully utilize Motive for analyzing and optimizing captured 3D tracking data. That said, we do not recommend changing these settings, as the defaults work well for most tracking applications.
THR setting under camera properties
Reconstruction is the process of deriving 3D points from the 2D coordinates obtained from captured camera images. When multiple synchronized images are captured, the 2D centroid locations of detected marker reflections are triangulated on each captured frame and processed through the solver pipeline. This process involves trajectorization of the detected 3D markers within the calibrated capture volume and the booting process for tracking defined assets.
For real-time tracking in Live mode, the settings for this pipeline can be configured from the Live Pipeline tab in the Application Settings. For post-processing recorded files in Edit mode, the solver settings can be accessed under the corresponding Take properties. Note that optimal configurations may vary depending on the capture application and environmental conditions, but for most common applications, the default settings work well.
In this page, we will focus on the Live Pipeline settings and the Camera Settings, which are the key settings that have direct effects on the reconstruction outcome.
Camera settings can be configured under the Devices pane. In general, the overall quality of 3D reconstructions is affected by the quality of captured camera images. For this reason, the camera lens must be focused on the tracking volume, and the settings should be configured so that the markers are clearly visible in each camera view. Thus, the camera settings, such as camera exposure and IR intensity values, must always be checked and optimized in each setup. The following sections highlight additional settings that are directly related to 3D reconstruction.
Tracking mode vs. Reference mode: Only the cameras that are configured in the tracking mode (Object or Precision) will contribute to reconstructions. Cameras in the reference mode (MJPEG or Grayscale) will NOT contribute to reconstructions. See Camera Video Types page for more information.
To toggle between camera video types in Motive, click the camera video type icon under Mode in the Devices pane.
The THR setting is located in the camera properties in Motive. When cameras are in a tracking mode, only pixels with brightness values greater than the configured threshold are captured and processed. Pixels brighter than the threshold are referred to as thresholded pixels; all other pixels are filtered out. Clusters of thresholded pixels are then run through the 2D object filter to be considered as candidate marker reflections.
We do not recommend lowering the THR value (default: 200), since lowering it can introduce false reconstructions and noise into the data.
To inspect brightness values of the pixels, set the Pixel Inspection to true under the View tab in the Application Settings.
The Live Pipeline settings under the Application Settings control the tracking quality in Motive. When a camera system captures multiple synchronized 2D frames, the images are processed through two main filters before being reconstructed into 3D tracking data. The first filter is at the camera hardware level and the second at the software level; both decide which 2D reflections are identified as marker reflections and reconstructed into 3D data. Adjust these settings to optimize 3D data acquisition in both live reconstruction and post-processing reconstruction of capture data.
When a frame is captured by a camera, the 2D camera filter is applied. This filter judges the sizes and shapes of the detected reflections or IR illuminations and determines which ones can be accepted as markers. Please note that the camera filter settings can be configured in Live mode only, because this filter is applied at the hardware level when the 2D frames are first captured. Thus, you cannot modify these settings on a recorded Take, as the 2D data has already been filtered and saved; however, when needed, you can increase the threshold on the filtered 2D data and perform post-processing reconstruction to recalculate 3D data from the 2D data.
Min/Max Thresholded Pixels
The Min/Max Thresholded Pixels settings determine lower and upper boundaries of the size filter. Only reflections with pixel counts within the boundaries will be considered as marker reflections, and any other reflections below or above the defined boundary will be filtered out. Thus, it is important to assign appropriate values to the minimum and maximum thresholded pixel settings.
For example, in a close-up capture application, marker reflections appear bigger in the camera's view. In this case, you may want to raise the maximum threshold value so that reflections with more thresholded pixels are still considered as marker reflections. For common applications, however, the default range should work fine.
Circularity
In addition to the size filter, the 2D object filter also identifies marker reflections based on their shape, specifically their roundness. It assumes that all marker reflections have circular shapes and filters out all non-circular reflections detected by each camera. The allowable circularity value is defined under the Marker Circularity setting in the Reconstruction pane. The valid range is between 0 and 1, with 0 being completely flat and 1 being perfectly round. Only reflections with circularity values greater than the defined threshold are considered marker reflections.
Object mode vs. Precision Mode
Object mode and Precision mode deliver slightly different data to the host PC. In Object mode, cameras compute the 2D centroid location, size, and roundness of the markers on-camera and deliver these metrics to the host PC. In Precision mode, cameras instead send the pixel data from the thresholded regions to the host PC, where additional processing determines the centroid location, size, and roundness of the reflections. Read more on the Camera Video Types page.
After the 2D camera filter has been applied, each 2D centroid captured by a camera forms a marker ray: a 3D vector that extends from the calibrated camera through the detected centroid into the capture volume. When the minimum required number of rays (defined by the Minimum Rays setting) converge and intersect within the allowable maximum offset distance (defined by the 3D Threshold setting), trajectorization of a 3D marker occurs. Trajectorization is the process of using 2D data to calculate the respective 3D marker trajectories in Motive.
Tracked Ray (Green)
Tracked rays are marker rays representing detected 2D centroids that contribute to 3D reconstructions within the volume. Tracked rays are visible only when the corresponding reconstructions are selected in the viewport.
Untracked Ray (Red)
An untracked ray is a marker ray that fails to contribute to the reconstruction of a 3D point. Untracked rays occur when the reconstruction requirements, usually the ray count or the maximum residual, are not met.
Motive processes marker rays with the camera calibration to reconstruct the respective markers, and the solver settings determine how 2D data gets trajectorized and solved into 3D data for tracking Rigid Bodies and/or Skeletons. The solver not only tracks from the marker rays but also utilizes pre-defined asset definitions to provide high-quality tracking. The default solver settings work for most tracking applications, and users should not need to modify them. That said, some of the basic settings that can be modified are summarized below.
Minimum Rays to Start / Minimum Rays to Continue
This setting sets the minimum number of tracked marker rays required for a 3D point to be reconstructed; in other words, the required number of calibrated cameras that need to see the marker. Increasing the minimum ray count may prevent extraneous reconstructions, while decreasing it may prevent marker occlusions when not enough cameras see the markers. In general, modifying this is recommended only for high camera count setups.
More Settings
The Live Pipeline settings don't have to be modified for most tracking applications. There are other reconstruction settings that can be adjusted to improve the acquisition of 3D data. For a detailed description of each setting, read through the Application Settings: Live Pipeline page or refer to the corresponding tooltips.
Motive performs real-time reconstruction of 3D coordinates directly from captured or recorded 2D data. When Motive is live-processing the data, you can examine the marker rays in the viewport, inspect the Live Pipeline settings, and optimize the 3D data acquisition.
There are two modes where Motive is reconstructing 3D data in real-time:
Live mode (Live 2D data capture)
2D mode (Recorded 2D data)
In Live mode, Motive live-processes captured 2D frames to obtain 3D tracking data in real time, and you can inspect and monitor the marker rays in the 3D viewport. Any changes to the Live Pipeline (Solver/Camera) settings under the Application Settings are reflected immediately in Live mode.
The 2D mode is used to monitor 2D data in the post-processing of a captured Take. When a capture is recorded in Motive, both the 2D camera data and the reconstructed 3D data are saved into the Take file, and by default, the 3D data is loaded when a recorded Take file is opened.
Recorded 3D data contains only the 3D coordinates that were live-reconstructed at the moment of capture; in other words, this data is completely independent of the 2D data once the recording has been made. You can still, however, view and use the recorded 2D data to optimize the solver parameters and reconstruct a fresh set of 3D data from it. To do so, switch into the 2D mode in the Data pane.
In 2D mode, Motive reconstructs in real time from the recorded 2D data, using the reconstruction/solver settings that were configured at the time of recording; these settings are saved under the properties of the corresponding TAK file. Please note that the reconstruction/solver settings from the TAK properties, rather than the settings from the Application Settings panel, are applied in post-processing. When editing a TAK file in 2D mode, any changes to the reconstruction/solver settings under the TAK properties are reflected in how the 3D reconstructions are solved, in real time.
Switching to 2D Mode
Applying changes to 3D data
Once the reconstruction/solver settings have been adjusted and optimized on the recorded data, the post-processing reconstruction pipeline needs to be run on the Take in order to reconstruct a new set of 3D data. Note that the existing 3D data will be overwritten, and all post-processing edits made to it will be discarded.
The post-processing reconstruction pipeline allows you to convert the 2D data of a recorded Take into 3D data. In other words, you can obtain a fresh set of 3D data from recorded 2D camera frames by performing reconstruction on a Take. Also, if any of the reconstruction parameters have been optimized post-capture, the changes will be reflected in the newly obtained 3D data.
To perform post-processing reconstruction, open the Data pane, select the desired Takes, right-click on the selection, and use either the Reconstruct pipeline or the Reconstruct and Auto-label pipeline from the context menu.
Camera Filter Settings: In Edit mode, 2D camera filters can still be modified from the tracking group properties in the Devices pane. Modified filter settings change which markers in the recorded 2D data get processed through the Live Pipeline engine.
Solver/Reconstruction Settings: When you perform post-processing reconstruction on recorded Takes, a new set of 3D data is reconstructed from the filtered 2D camera data. In this step, the solver settings defined under the corresponding Take properties in the Properties pane are used. Note that the reconstruction properties under the Application Settings apply to live capture only.
Reconstruct and Auto-label will additionally apply the auto-labeling pipeline to the obtained 3D data and label any markers that associate with existing asset (Rigid Body or Skeleton) definitions. The auto-labeling pipeline is explained further on the Labeling page.
Post-processing reconstruction can be performed either on the entire frame range of a Take or only within a desired frame range, selected under the Control Deck or in the Graph pane. When nothing is selected, reconstruction is applied to all frames.
The entire frame ranges of multiple Takes can be selected and processed together by selecting the desired Takes under the Data pane.
Reconstructing recorded Takes again, via either the Reconstruct or the Reconstruct and Auto-label pipeline, will completely overwrite the existing 3D data, and any post-processing edits to trajectories and marker labels will be discarded.
Also, for Takes involving Skeleton assets, if the Skeletons are never in well-trackable poses throughout the captured Take, the recorded Skeleton marker labels, which were intact during the live capture, may be discarded, and the reconstructed markers may not be auto-labeled again. This is another reason to start a capture with a calibration pose (e.g. T-pose).
The Motive Batch Processor is a separate stand-alone Windows application, built on the new NMotive scripting and programming API, that can be used to process a set of Motive Take files via IronPython or C# scripts. While the Batch Processor includes some example script files, it is primarily designed to run user-authored scripts.
Initial functionality includes scripting access to file I/O, reconstruction, high-level Take processing using many of Motive's existing editing tools, and data export. Upcoming versions will provide access to track-, channel-, and frame-level information for creating cleanup and labeling tools based on individual marker reconstruction data.
Motive Batch Processor scripts make use of the NMotive .NET class library, and you can also utilize the NMotive classes to write .NET programs and IronPython scripts that run outside of this application. The NMotive assembly is installed in the Global Assembly Cache and is also located in the assemblies sub-directory of the Motive install directory. For example, the default location for the assembly included in the 64-bit Motive installer is:
C:\Program Files\OptiTrack\Motive\assemblies\x64
The full source code for the Motive Batch Processor is also installed with Motive, at:
C:\Program Files\OptiTrack\Motive\MotiveBatchProcessor\src
You are welcome to use the source code as a starting point to build your own applications on the NMotive framework.
Requirements
A batch processor script using the NMotive API. (C# or IronPython)
Take files that will be processed.
Steps
Launch the Motive Batch Processor. It can be launched from the Start menu, the Motive install directory, or the Data pane in Motive.
First, select and load a Batch Processor script. Sample scripts for various pipelines can be found in the [Motive Directory]\MotiveBatchProcessor\ExampleScripts\ folder.
Load the captured Takes (TAK) that will be processed using the imported scripts.
Click Process Takes to batch process the Take files.
Reconstruction Pipeline
When running the reconstruction pipeline in the Batch Processor, the reconstruction settings must be loaded using the ImportMotiveProfile method. From Motive, export the user profile and make sure it includes the reconstruction settings. Then, import this user profile file into the Batch Processor script before running the reconstruction (trajectorizer) pipeline so that the proper settings are used for reconstructing the 3D data. For more information, refer to the sample scripts located in the TakeManipulation folder.
A class reference in Microsoft compiled HTML (.chm) format can be found in the Help sub-directory of the Motive install directory. The default location for the help file (in the 64-bit Motive installer) is:
C:\Program Files\OptiTrack\Motive\Help\NMotiveAPI.chm
The Motive Batch Processor can run C# and IronPython scripts. Below is an overview of the C# script format, as well as an example script.
A valid Batch Processor C# script file must contain a single class implementing the ITakeProcessingScript interface. This interface defines a single function:

Result ProcessTake( Take t, ProgressIndicator progress )

Result, Take, and ProgressIndicator are all classes defined in the NMotive namespace. The Take object t is an instance of the NMotive Take class; it is the Take being processed. The progress object is an instance of the NMotive ProgressIndicator class and allows the script to update the Batch Processor UI with progress and messages. The general format of a Batch Processor C# script is:
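The skeleton below is a minimal sketch of that format. The class name and message strings are illustrative, and the specific ProgressIndicator and Result member calls are assumptions rather than a verbatim sample; the shipped example scripts show the authoritative usage.

using NMotive;

public class MyProcessingScript : ITakeProcessingScript
{
    // Called by the Batch Processor once per loaded Take file.
    public Result ProcessTake( Take t, ProgressIndicator progress )
    {
        // Take processing code goes here. Progress and status messages
        // can be reported back to the Batch Processor UI through the
        // ProgressIndicator object (member names are assumptions).
        progress.SetMessage( "Processing take..." );

        // Return a Result indicating success or failure and a message
        // (constructor signature assumed).
        return new Result( true, string.Empty );
    }
}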
In the [Motive Directory]\MotiveBatchProcessor\ExampleScripts\ folder, there are multiple C# (.cs) sample scripts that demonstrate the use of NMotive for processing various pipelines, including tracking data export and other post-processing tools. Note that your C# script file must have a '.cs' extension.
Included sample script pipelines:
ExporterScript - BVH, C3D, CSV, FBXAscii, FBXBinary, TRC (a CSV sketch follows below)
TakeManipulation - AddMarker, DisableAssets, GapFill, MarkerFilterScript, ReconstructAutoLabel, RemoveUnlabeledMarkers, RenameAsset
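As an illustration of the ExporterScript pipelines listed above, a CSV export script might look like the following sketch. The CSVExporter class name mirrors the ExporterScript samples, but the exporter options are not reproduced here, and the Export call signature and the Take.FileName property are assumptions; consult the shipped samples for the authoritative version.

using System.IO;
using NMotive;

public class ExportToCsvScript : ITakeProcessingScript
{
    public Result ProcessTake( Take t, ProgressIndicator progress )
    {
        progress.SetMessage( "Exporting to CSV..." );

        // Construct the CSV exporter and set any options here
        // (units, axes, header content, etc.; see the ExporterScript
        // samples for the available properties).
        CSVExporter exporter = new CSVExporter();

        // Write the .csv next to the .tak file
        // (Export signature and FileName property assumed).
        string outputFile = Path.ChangeExtension( t.FileName, ".csv" );
        return exporter.Export( t, outputFile, true );
    }
}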
IronPython is an implementation of the Python programming language that can use both .NET libraries and Python libraries. The Batch Processor can execute valid IronPython scripts in addition to C# scripts.
Your IronPython script file must import the clr module and reference the NMotive assembly. In addition, it must contain the following function, where t is the NMotive Take being processed and progress is an NMotive ProgressIndicator:

def ProcessTake(t, progress)
The following illustrates a typical IronPython script format.
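A minimal sketch of that format is shown below, with assumed member names as in the C# sketch above; the shipped IronPython samples show the authoritative usage.

import clr

# Reference the NMotive assembly installed with Motive.
clr.AddReference('NMotive')
from NMotive import *

def ProcessTake(take, progress):
    # Take processing code goes here. Status can be reported back to
    # the Batch Processor UI (member names are assumptions):
    progress.SetMessage('Processing take...')

    # Return an NMotive Result indicating success or failure.
    return Result(True, '')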
In the [Motive Directory]\MotiveBatchProcessor\ExampleScripts\ folder, there are also sample scripts that demonstrate the use of NMotive for processing various pipelines, including tracking data export and other post-processing tools. Note that your IronPython script file must have a '.py' extension.
A list of the default rigid body creation properties is listed under the Rigid Bodies tab. These properties are applied only to rigid bodies that are newly created after the properties have been modified. For descriptions of the rigid body properties, please read through the corresponding properties page.
Note that these are the default creation properties. Asset-specific rigid body properties are modified directly from the corresponding asset's properties.
A list of the default Skeleton display properties for newly created skeletons is listed under the Skeletons tab. These properties are applied only to skeleton assets that are newly created after the properties have been modified. For descriptions of the skeleton properties, please read through the Properties: Skeleton page.
Note that these are the default creation properties. Asset-specific skeleton properties are modified directly from the corresponding asset's properties.
Skeleton Creation Pose
Chooses which Skeleton calibration pose is used for creation: T-pose, A-pose Palms Downward, A-pose Palms Forward, or A-pose Elbows Bent.
Head Upright
Creates the skeleton with heads upright irrespective of head marker locations.
Straight Arms
Creates the skeleton with arms straight even when arm markers are not straight.
Straight Legs
Creates the skeleton with straight knee joints even when leg markers are not straight.
Feet On Floor
Creates the skeleton with feet planted on the ground level.
Height Marker
Forces the solver to align the height of the created skeleton with the top head marker.
In Motive, the Application Settings can be accessed under the View tab or by clicking the icon on the main toolbar.
Default Application Settings can be recovered using Reset Application Settings under the Edit Tools tab from the main toolbar.
The Reconstruction tab contains a list of parameters for the real-time Point Cloud reconstruction engine.
Take Suffix Format String
Sets the separator (_) and string format specifier (%03d) for the suffix added after existing file names; for example, a _%03d suffix produces Take_001, Take_002, and so on.
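Motive applies this formatting internally; the short snippet below only illustrates what a C-style %03d specifier (a zero-padded, three-digit integer) produces, using its C# equivalent. The base name is hypothetical.

using System;

class SuffixFormatDemo
{
    static void Main()
    {
        string baseName = "Take"; // hypothetical existing Take name
        for ( int i = 1; i <= 3; i++ )
        {
            // D3 is the C# equivalent of the C-style %03d specifier.
            Console.WriteLine( $"{baseName}_{i:D3}" );
        }
        // Output: Take_001, Take_002, Take_003
    }
}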
Numeric LEDs
Enables or disables the LED panel on the front of the cameras that displays the assigned camera numbers.
Auto Archive Takes
Save Data Folder
Restore Calibration
Automatically loads the previous, or last saved, calibration setting when starting Motive.
Camera ID
Sets how camera IDs are assigned to each camera in a setup. Available options are By Location and By Serial Number. When assigning by location, camera IDs are given following the positional order in a clockwise direction, starting from the -X, -Z quadrant with respect to the origin.
Device Profile
Sets the default device profile (XML format) to load into Motive. The device profile determines and configures the settings for peripheral devices such as force plates, NI-DAQ devices, or navigation controllers.
Switch to MJPEG
Configures the Aim Assist button. Sets whether the button will switch the camera to MJPEG mode and back to the default camera group record mode. Valid options are: True (default) and False.
Aiming Crosshairs
Sets whether the camera button will display the aiming crosshairs on the MJPEG view of the camera. Valid options are True (default), False.
Aiming Button LED
Enables or disables LED illumination on the Aim Assist button behind Prime Series cameras.
Live Color
(Default: Blue) Sets the indicator ring color for cameras in Live mode.
Recording Color
(Default: Red) Sets the indicator ring color for cameras when recording a capture.
Playback Color
(Default: Black) Sets the indicator ring color for cameras when Motive is in playback mode.
Selection Color
(Default: Yellow) Sets the indicator ring color for cameras that are selected in Motive.
Scene Camera
(Default: Orange) Sets the indicator ring color for cameras that are set as the reference camera in Motive.
LLDP (PoE+) Detection
Enables detection of PoE+ switches by high-power cameras (Prime 17W and Prime 41). LLDP allows the cameras to communicate directly with the switch and determine power availability to increase output to the IR LED rings. When using Ethernet switches that are not PoE+ enabled or not LLDP enabled, cameras will not go into high-power mode even with this setting enabled.
Strobe On During Playback
Keeps the camera IR strobe on at all times, even during the playback mode.
The Assets tab in the application settings panel is where you can configure the creation properties for Rigid Body and Skeleton assets. In other words, all of the settings configured in this tab will be assigned to the Rigid Body and Skeleton that are newly created in Motive.
You can change the naming convention of Rigid Bodies when they are first created. For instance, if it is set to RigidBody, the first Rigid Body will be named RigidBody when first created. Any subsequent Rigid Bodies will be named RigidBody 001, RigidBody 002, and so on.
User definable ID. When streaming tracking data, this ID can be used as a reference to specific Rigid Body assets.
The minimum number of markers that must be labeled in order for the respective asset to be booted.
The minimum number of markers that must be labeled in order for the respective asset to be tracked.
Applies double exponential smoothing to translation and rotation. Disabled at 0.
Compensate for system latency by predicting movement into the future.
For this feature to work best, smoothing needs to be applied as well.
Toggle 'On' to enable. Displays the asset's name over the corresponding asset in the 3D viewport.
Select the default color a Rigid Body will have upon creation. Select 'Rainbow' to cycle through a different color each time a new Rigid Body is created.
When enabled this shows a visual trail behind a Rigid Body's pivot point. You can change the History Length, which will determine how long the trail persists before retracting.
Shows a Rigid Body's visual overlay. This is by default Enabled. If disabled, the Rigid Body will only appear as individual markers with the Rigid Body's color and pivot marker.
When enabled for Rigid Bodies, this will display the Rigid Body's pivot point.
Shows the transparent sphere that represents where an asset first searches for markers, i.e. the asset model marker.
When enabled and a valid geometric model is loaded, the model will draw instead of the Rigid Body.
Allows the asset to deform more or less to accommodate markers that don't fit the model. Higher values allow assets to fit onto markers that match the model less closely.
Creates the Skeleton with arms straight even when arm markers are not straight.
Creates the Skeleton with straight knee joints even when leg markers are not straight.
Creates the Skeleton with feet planted on the ground level.
Creates the Skeleton with heads upright irrespective of head marker locations.
Forces the solver to align the height of the created Skeleton with the top head marker.
Height offset applied to hands to account for markers placed above the wrist and knuckle joints.
Same as the Rigid Body visuals above:
Label
Creation Color
Bones
Asset Model Markers
Changes the color of the skeleton visual to red when there are no markers contributing to a joint.
Displays the coordinate axes of each joint.
Displays the lines between labeled skeleton markers and corresponding expected marker locations.
Displays lines between skeleton markers and their joint locations.
Reconstruction settings configured under the Application Settings apply only to the real-time reconstruction in Live mode. Parameters for post-processing reconstruction pipelines can be modified from the corresponding Take properties.
The real-time reconstruction settings can be accessed in the Reconstruction tab under the Application Settings pane.
Reconstruction in motion capture is a process of deriving 3D points from 2D coordinate information obtained from captured images, and the Point Cloud is the core engine that runs the reconstruction process. The reconstruction settings define the parameters of the point cloud engine, and they can be modified to optimize the acquisition of 3D data points.
For more information on how to utilize the reconstruction settings, visit the corresponding workflow page.
Due to inherent errors in marker tracking, rays generally do not converge perfectly on a single point in 3D space, so a tolerance value is defined. This tolerance, called the residual, represents one of the reconstruction constraints. Treating each ray as an infinite series of points aligned in a straight line, two or more rays that have points within the defined residual range (in mm) will form a marker.
Enable Point Cloud Reconstruction
Default: ON
Default: 10.00 mm
The residual value sets the maximum allowable offset distance (in mm) between rays contributing to a single 3D point.
When the residual value is set too high, unassociated marker rays may contribute to marker reconstruction, and non-existing ghost markers may be reconstructed. When this value is set too low, the contributing rays within a marker could reconstruct multiple markers where there should only be one.
Choosing a good Residual Value
Depending on the size of the markers used, the contributing rays will converge with a varying tolerable offset. If you are working with smaller markers, set the residual value lower; if you're working with larger markers, set this value higher, because the centroid rays will not converge as precisely as for smaller markers. A starting point is to set the residual value to the diameter of the smallest marker and go down from there until you start seeing ghost markers. For example, when 3 mm and 14 mm markers are captured in the same volume, set the residual value to less than 3 mm. Ghost markers can appear on the larger markers if this value is set too low.
Default: None — the calibration solver will set a suggested distance based on the wanding results, but this can still be adjusted by the user after calibration.
This sets the maximum distance, in meters, a marker can be from the camera to be considered for 3D reconstruction. In very large volumes with high resolution cameras, this value can be increased for a longer tracking range or to allow contributions from more cameras in the setup. This setting can also be reduced to filter out longer rays from reconstruction. Longer rays generally produce less accurate data than shorter rays.
When capturing in a large volume with a medium-sized camera system (20 to 50 cameras), this setting can be adjusted for better tracking results. Tracking rays from cameras at the far end of the volume may be inaccurate for tracking markers on the opposite end, and the unstable rays may contribute to ghost marker reconstructions. In this case, lower the maximum ray length to restrict reconstruction contributions from cameras tracking at long distances. For captures vulnerable to frequent marker occlusions, adjusting this constraint is not recommended, since more camera coverage is needed to prevent the occlusions. Note that lowering this setting can take a toll on performance at higher camera and marker counts, because the solver has to perform numerous calculations per second to decide which rays are good.
Default: 0.2 m
This sets the minimum distance, in meters, between a marker and a camera for the camera to contribute to the reconstruction of the marker. When ghost markers appear close to the camera lens, increase this setting to restrict the unwanted reconstructions in the vicinity. But for close-range tracking applications, this setting must be set low.
Default: 2 rays
This sets the required minimum number of cameras that must see a marker for it to be reconstructed.
For a marker to be reconstructed, at least two cameras need to see it. The minimum rays setting defines the required number of cameras that must see a marker for it to be reconstructed. If you have 4 cameras and set this to 4, all cameras must see the marker; otherwise, the marker will not be reconstructed and the contributing rays will become untracked rays.
When more rays contribute to a marker, more accurate reconstruction can be achieved, but generally you don't need all cameras in a setup to see a marker. If many cameras are capturing a marker, you can safely increase this setting to prevent false reconstructions, which may come from 2 or 3 rays that happen to converge within the residual range. However, be careful when increasing this setting, because a high minimum-ray requirement may decrease the effective capture volume and increase the frequency of marker occlusions during capture.
Default: Passive
Default: 12
This setting is available only if the marker labeling mode is set to one of the active marker tracking modes. It sets the complexity of the active illumination patterns. When tracking a high number of rigid bodies, this may need to be increased to allow for more combinations of the illumination patterns on each marker. When this value is set too low, active labeling will not work properly.
Default: Disabled
This property was called Ray Ranking in older versions.
Default: 4
This setting enables Ray Ranking, which calculates the quality of each ray to potentially improve the reconstruction. Setting this to zero turns ray ranking off, while 1 through 4 set the number of evaluation iterations. Setting this value to the maximum of 4 will slow down the reconstruction process but will produce more accurate results.
The Ray Ranking increases the stability of the reconstruction but at a heavy performance cost. The ray quality is analyzed by comparing convergence of rays that are contributing to the same marker. An average converging point is calculated, and each ray is ranked starting from the one closest to the converging point. Then, each ray is weighed differently in the Point Cloud reconstruction engine according to the assigned rankings.
This setting is especially useful when there are multiple rays contributing to a marker reconstruction. If you're working with small to medium marker counts, enabling this will not yield an evident improvement. Also, when precise real-time performance is required, disable this setting, especially for a setup with numerous cameras.
Default: 0 pixels
Establishes a dead zone, in pixels, around the edge of the 2D camera image. Any 2D objects detected within this gutter will be discarded before calculating through the point cloud. In essence, it is a way of getting only the best data of the captured images, because markers seen at the edges of the camera sensor tend to have higher errors.
This setting can be increased in small amounts to accommodate cases where lens distortions are potentially causing problems with tracking. Another use of this setting is to limit the amount of data going to the reconstruction solver, which may help when you have a lot of markers and/or cameras. Be careful when adjusting this setting, as the trimmed data cannot be reacquired in post-processing pipelines.
Default: 5 degrees
The minimum allowable angle, in degrees from the marker's point of view, between the rays for them to be considered valid for marker reconstruction. This separation also represents the minimum distance required between the cameras. In general, cameras should be placed with enough distance between them to capture unique views of the target volume. For example, if there are only two cameras, an ideal reconstruction would occur when the cameras are separated far enough that the rays converge at a 90-degree incident angle from the perspective of the reconstructed marker(s).
When working with a smaller system with fewer cameras, there will be only a limited number of marker rays that can be utilized for reconstruction. In this case, lower this setting to allow reconstruction contributions even from cameras that are in close vicinity to each other.
Default: False
When the Rigid Body Marker Override is set to True, Motive will replace observed 3D markers with the rigid body's solution for those markers. 3D tracking data of reconstructed and labeled trajectories will be replaced by the expected marker locations of the corresponding rigid body solve.
Default: True
When this feature is enabled, Motive uses expected marker locations from both the model solve and the trajectory history to create virtual markers. These virtual markers are not direct reconstructions from the Point Cloud engine. When the use of smart markers is enabled, rigid body and skeleton asset definitions will also be used in conjunction with 2D data and reconstructed 3D data to facilitate reconstruction of additional 3D marker locations and improve tracking stability. These virtual markers are created to make live data match recorded data in situations where model and history data helped to improve the live solve.
Using the asset definitions in obtaining the 3D data can be especially beneficial for accomplishing stable tracking of assets in low camera count systems, where reconstructions may not always meet the minimum ray requirement.
Usage note: In Motive 2.0, trajectories of virtually created markers on a skeleton segment may not get plotted in the graph view pane.
Default: True
Default: 20
Sets the required minimum number of frames without occlusion for a tracked marker to be recognized as the same reconstruction to form a trajectory. If a marker is hidden, or occluded, longer than the defined number of frames, then the trajectory will be truncated and the marker will become unlabeled.
Default: 0.06 m
To identify and label a marker from one frame to the next, a prediction radius must be set. If a marker location in the subsequent frame falls outside of the defined prediction radius, the marker will no longer be identified and become unlabeled.
For capturing relatively slow motions with tight marker clusters, limiting the prediction radius will help maintain precise marker labels throughout the trajectory. Faster motions will have a bigger frame-to-frame displacement, and the prediction radius should be increased. When capturing at a low frame rate setting, set this value higher, since there will be bigger displacements between frames.
Bound Reconstruction
(Default: False) When set to true, 3D points will be reconstructed only within the given boundaries of the capture volume. The minimum and maximum boundaries along the X/Y/Z axes are defined in the properties below.
Visible Bounds
Bounds Shape
(Default: Cuboid) This setting selects the shape of the reconstruction bound. You can select from cuboid, cylinder, spherical, or ellipsoid shapes, and the corresponding size and location parameters (e.g. center x/y/z and width x/y/z) will appear so that the bound can be customized to restrict reconstruction to a certain area of the capture volume.
Pose Detection
Default: True
Pose detection improves the stability of skeleton tracking by detecting standing poses. For multi-skeleton captures, this feature may increase the skeleton solve latency.
Minimum Key Frames
Default: 2
This setting sets the required minimum number of frames for each trajectory in the recorded 3D data. Any trajectories with a length less than the required minimum will be discarded from the 3D data after running the auto-labeling pipeline.
Auto-labeler Passes
Default: 1
The number of iterations for analyzing detected marker trajectories to maintain constant marker labels. Increasing this setting can improve the marker auto-labeling, but more iterations will require more time and computation to complete.
Rigid Body Assisted Labeling can be used to optimize the labeling of markers within a region defined by a rigid body. The first step in using this feature is to create a rigid body from markers that are visible and rigidly connected. The example shown in the figure below demonstrates this for hand tracking. Five white markers are selected on the top of the wrist, which is rigidly defined. The black markers on the fingers are not rigidly defined in any fashion but are within the boundary of the Rigid Body Assisted Labeler. Labeling continuity is improved for the markers on the fingers, which are given automatic labels. Tracking of organic or flexible objects that do not have tracking models, such as the face and hands, is a good candidate for Rigid Body Assisted Labeling.
Rigid Body-Assisted Labeling
Default: False
Enables or disables the rigid body assisted labeling feature.
Rigid Body Volume Radius
Default: 300 mm
The rigid body volume radius defines the region of space where rigid body assisted labeling is applied. Increasing this radius will increase the time needed for auto-labeling, so take care when setting this property.
Prediction Radius (mm)
Default: 10 mm
The prediction radius defines the size of the bounding region used to label markers. When labeling a marker from one frame to the next, a bounding region, relative to the rigid body, is created around each labeled marker. The labeling continuity is restricted to the bounding region from frame to frame. Increasing this can allow markers to swap if there are occlusions in the data. Decreasing this restricts labeling from frame to frame but may lead to an increase in broken trajectories.
Maximum Assisted Labeling Gap
Default: 30 frames
The maximum gap frames property defines the maximum number of frames a marker can be hidden before it is truncated or unlabeled. Increase this value if larger gaps are anticipated. Increasing the assisted labeling gap will increase the processing time of reconstruction.
Discard External Markers
Default: False
Discards markers outside of the rigid body volume. Enabling this property will eliminate marker reconstructions outside of the region defined by the Rigid Body Volume Radius.
Dynamic Constraints
Default: None
Prevents the rigid body from moving or rotating more than a specified amount per frame.
Max Translation (mm)
Default: 100
Distance for the Dynamic Translation Constraint option.
Max Rotation (deg)
Default: 30
Angle for the Dynamic Rotation Constraint option.
Minimum Tracking Frames
Default: 20
Dynamic constraints are enabled once the rigid body has been tracked consecutively for more than this number of frames.
Marker Filter Diameters
Default: False
When enabled, markers smaller than the minimum diameter (below) will not be used for rigid body tracking.
Minimum Diameter (mm)
Default: 10
Diameter used for the Marker Filter Diameter option.
Timecode values (SMPTE) used for frame alignment, or for reserving future record trigger events on timecode-supported systems. Camera systems usually have higher frame rates than the SMPTE timecode; in the triggering packets, the timecode subframe value is always equal to 0 at the trigger.
C-Motion wiki:
Runs local or over network. Supports Unreal Engine versions up to 4.17. This plugin allows streaming of rigid bodies and integration of HMD tracking within Unreal Engine projects. For more details, read through the documentation page.
Runs local or over network. This plugin allows streaming of tracking data and integration of HMD tracking within Unity projects. For more details, read through the documentation page.
Enable Marker Size under the visual aids options in the Camera Preview viewport to inspect which reflections are accepted, or omitted, by the size filter.
Enable Marker Circularity under the visual aids in the Camera Preview viewport to inspect which reflections are accepted, or omitted, by the circularity filter.
Monitoring marker rays is an efficient way of inspecting reconstruction outcomes. The rays show up by default; if not, they can be enabled under the visual aids options in the 3D viewport toolbar. There are two different types of marker rays in Motive: tracked rays and untracked rays. By inspecting these marker rays, you can easily find out which cameras are contributing to the reconstruction of a selected marker.
Under the Data pane, open the menu options and check the 2D Mode option.
Reconstruction in motion capture is a process of deriving 3D points from 2D coordinate information obtained from captured images, and the Point Cloud is the core engine that runs the reconstruction process. The reconstruction settings in the Application Settings modify the engine's parameters for real-time reconstruction. These settings can be modified to optimize the quality of reconstructions in Live mode depending on the conditions of the capture and what you're trying to achieve, and the reconstruction outcomes can be monitored live as the settings are configured.
See the corresponding page for more information on each setting.
For details on the reconstruction workflow, read through the corresponding workflow page.
Reconstruction settings for post-processing reconstruction pipelines on recorded captures can be modified under the corresponding Take properties.
Enable/Disable auto-archiving of Takes when
Motive persists all of the session folders that are imported into the Data pane so that users don't have to re-import them after closing the application. If this is set to false, session folders will no longer be persisted, and only the default session folder will be loaded.
Controls the color of the camera indicator ring (Prime series cameras only). Options include distinct indications for Live, Recording, Playback, Selection, and Scene Camera statuses, and you can choose the color for the corresponding camera status.
A list of the default Rigid Body creation properties is listed under the Rigid Bodies tab. These properties are applied only to Rigid Bodies that are newly created after the properties have been modified. For descriptions of the Rigid Body properties, please read through the corresponding properties page.
Note that these are the default creation properties. Asset-specific Rigid Body properties are modified directly from the corresponding asset's properties.
A list of the default Skeleton display properties for newly created Skeletons is listed under the Skeletons tab. These properties are applied only to Skeleton assets that are newly created after the properties have been modified. For descriptions of the Skeleton properties, please read through the corresponding properties page.
Note that these are the default creation properties. Asset-specific Skeleton properties are modified directly from the corresponding asset's properties.
The Point Cloud reconstruction engine converts two-dimensional points from camera images into coordinates in three-dimensional space through triangulation. All cameras should be calibrated for the engine to function properly (see the Calibration page). The triangulation of a marker occurs when a minimum of 2 rays intersect. Rays are generated from the objects present in a camera image, and they resolve into a 3D point when the conditions defined by the reconstruction settings are met. These rays can be seen in the 3D viewport when tracked rays and untracked rays are enabled from the visibility settings.
This toggles the Point Cloud reconstruction engine on and off. It is recommended to turn this off if computer resources need to be dedicated to 2D recording. When disabled, you will not be able to see 3D data in Live mode or from the recorded 2D data.
The residual can also be viewed as the minimum distance between two markers before they begin to merge. If two markers have a separation distance smaller than the defined residual (in mm), the contributing rays for each marker will be merged and only one marker will be reconstructed, which is undesirable. Remember that for a 3D point to be reconstructed, it needs at least two rays contributing to the marker, depending on the minimum rays setting.
If the calibration quality is not very good, you may need to set this value higher for increased tolerance, because there are more errors in the system. This will work only if your markers are far enough apart in the 2D views throughout the given marker motion. For best results, you should always work with a calibration with minimal error (see the Calibration page).
Configures Motive for tracking passive markers, synchronized active markers, or both.
Enables or disables continuous calibration. When enabled, Motive will continuously monitor the calibration quality and update it as necessary. For more information, refer to the Continuous Calibration page.
On the other hand, when working with a large system setup with many cameras, you can set this value a bit higher to limit marker rays coming from cameras that are too close together. Cameras in close vicinity obtain similar vantage points, which do not necessarily contribute unique positional data to the reconstruction but only increase the required amount of computation. Rays coming from very close cameras may also increase the error in the reconstruction. Better reconstruction can only be achieved with good overall camera coverage.
This is applicable only for rigid bodies using the corresponding tracking algorithm, and when Use Smart Markers is enabled.
More specifically, for rigid body tracking, Motive will utilize untracked rays along with the rigid body asset definition to replace the missing markers in the 3D data. In order to compute these reconstructions, the rigid body must be using the corresponding tracking algorithm. For skeleton tracking, only the asset definitions are used to approximate a virtual reconstruction at the location where the occluded marker was originally expected according to the corresponding skeleton asset.
When set to true, Motive will recognize the unique illuminations from synchronized active markers and perform active labeling on its reconstructions. If you are utilizing our active marker solution, this must be set to true. For more information about active labeling, read through the corresponding page.
Visualizes the reconstruction bounds in the 3D viewport.
After markers have been reconstructed in Motive, they must be labeled. Individual markers can be manually labeled, but the auto-labeler simplifies this process using asset definitions. Rigid body and skeleton assets created in Motive save their marker arrangement definitions and use them to auto-label corresponding marker sets within the Take. Auto-labeling is a process of associating 3D marker reconstructions across multiple captured frames by assigning marker labels within the defined constraints. After the labeling process, each of the labeled markers provides a respective 3D trajectory throughout the Take.
In Motive, the Application Settings can be accessed under the View tab or by clicking the icon on the main toolbar. Default Application Settings can be recovered using Reset Application Settings under the Edit Tools tab from the main toolbar.
The Mouse tab under the application settings is where you can check and customize the mouse actions used to navigate and control Motive.
The following table shows the most basic mouse actions:
You can also pick a preset mouse action profile to use. The presets can be accessed from the drop-down menu: choose from the provided presets, or save your current configuration into a new profile to use later.
The Keyboard tab under the application settings allows you to assign specific hotkey actions to make Motive easier to use. A list of default key actions can also be found on the Motive Hotkeys page.
Configured hotkeys can be saved into preset profiles to be used on a different computer or imported later when needed. Hotkey presets can be imported or loaded from the drop-down menu.
The Assets pane in Motive lists all of the assets involved in the live or recorded capture and allows users to manage them. This pane can be accessed under the View tab in Motive or by clicking the icon on the main toolbar.
A list of all assets associated with the Take is displayed in the Assets pane. Here you can view the assets, and right-click on an asset to export, remove, or rename it in the current Take.
You can also enable or disable assets by checking or unchecking the box next to each asset. Only enabled assets will be visible in the 3D viewport and used by the auto-labeler to label the markers associated with the respective assets.
Export Rigid Body / Export Skeleton
Exports the selected rigid body into a Motive trackable file (TRA), or the selected skeleton into either a Motive skeleton file (SKL) or an FBX file.
Remove Asset
Removes the selected asset from the project.
Rename Asset
Renames the selected asset.
Export Markers
Exports a skeleton marker template XML file. Exported XML files can be modified and imported again using Rename Markers, or when creating a skeleton in the Skeleton pane.
Rename Markers
Imports a skeleton marker template XML file onto the selected asset. If you wish to apply the imported XML for labeling, all of the skeleton markers need to be unlabeled and auto-labeled again.
Update Markers
Imports the default skeleton marker template XML files. This feature can be used to update skeleton assets that were created before Motive 1.10 to include marker colors and sticks.
Recalibrate From Markers
Re-calibrates an existing skeleton. This feature is essentially the same as re-creating a skeleton using the same skeleton Marker Set. See the Skeleton Tracking page for more information on using the skeleton template XML files.
Generate Markers
This option colors the labeled markers and creates marker sticks that interconnect consecutive labels. More specifically, this will modify the marker XML file: it adds values to the color attributes and generates Marker Stick elements so that users can export the markers and easily modify the colors and sticks as needed. For more information, see Marker XML Files.
This section of the application settings is used for configuring the properties of all cameras in the tracking group. The settings include display options and masking properties, but most importantly the 2D Filter settings for the camera system, which determine which reflections are considered marker reflections in the camera view.
When a frame of image is captured by a camera, the 2D Object Filter is applied. By judging the sizes and shapes of the detected reflections, this filter determines which of them can be accepted as marker reflections. Parameters for the 2D Object Filter are configured in the Devices pane under the Filters section.
For Motive 2.0 and above. The 2D Object filter settings in the Reconstruction Settings pane have been moved over to the Devices pane.
Filter Type
Default: Size and Roundness
Toggles 2D object (Size and Roundness) filtering on or off. This filter is very useful for filtering out extraneous reflections according to their characteristics (size and roundness) rather than blocking pixels using the masking tool or the Block Visible feature. Turn off this setting only when you want to use every 2D pixel above the brightness threshold from the camera views. When there are extraneous or flickering reflections in the view, turn on the filter to consider reflections only from markers. There are multiple filtering parameters to distinguish the marker reflections; given assumed marker characteristics, the filtering parameters can be set. The size parameters can be defined to filter out extra-small or extra-large reflections that are most likely from extraneous sources other than markers. Non-circular reflections can be ignored, assuming that all reflective markers have circular shapes. Note that even when applying the size and roundness filter, you should always Block Visible when you calibrate.
Min Thresholded Pixels (pixels)
Default: 4 pixels
The minimum pixel size of a 2D object (a collection of pixels grouped together) for it to be included in the Point Cloud reconstruction. All pixels must first meet the brightness threshold defined in the Cameras pane in order to be grouped into a 2D object. This can be used to filter out small reflections that flicker in the view. The default value is 4, which means that there must be 4 or more pixels in a group for a ray to be generated.
Max Thresholded Pixels (pixels)
Default: 2000 pixels
The maximum size of a 2D object, in pixels, for it to be included in Point Cloud reconstruction. The default is 2000 pixels, meaning all detected reflections smaller than 2000 pixels will be included as 2D objects. Use this to filter out larger markers in a variable-marker capture. For instance, if you have 4 mm markers on an actor's face and 14 mm markers on their body, use this setting to filter out the larger markers if the need arises.
Circularity
Default: 0.6
This sets the threshold of the circularity filter. The valid range is between 0 and 1, with 1 being a perfectly round reflection and 0 being flat. Using this 2D object filter, the software can identify marker reflections by the shape, specifically the roundness, of the group of thresholded pixels. A higher circularity setting will filter out reflections that are not circular. It is recommended to optimize this setting so that extraneous reflections are efficiently filtered out without losing the marker reflections. When using lower-resolution cameras to capture smaller markers at a long distance, the marker reflection may appear more pixelated and non-circular; in this case, you may need to lower the circularity value for the reflection to be considered a 2D object in the camera view. This setting may also need to be lowered when tracking non-spherical markers to avoid filtering out their reflections.
Intrusion Band
Default: 0.5 (pixels)
The intrusion band feature allows cameras to recognize reflections that are about to be merged and filter them out before it happens. This filter occurs before the circularity filter, and these reflections are rejected before the thresholded pixels merge. This is useful for improving tracking accuracy, because bright pixels from nearby reflections may slightly shift the centroid locations. The intrusion band value is added to the calculated radius of detected markers to establish a boundary, and any extraneous reflection intruding on the boundary is considered an intrusion and gets omitted. When an intrusion happens, both the intruding reflection and the detected marker reflection will be filtered out.
Grayscale Floor
Default: 48
The grayscale floor setting further darkens pixels with lower brightness intensity values.
Object Margin
Default: 2 (pixels)
The object margin adds an additional margin on top of the intrusion band for filtering out merged reflections. Lowering this value will better detect close-by reflections, but may decrease the accuracy of the centroid positions as a tradeoff.
Name
Sets the name for the selected camera group.
Camera Color
Sets the color for camera group members as they appear in the 3D viewport. Color values are input as standard RGB triplets.
Visible Cameras
Selects whether cameras in the group are displayed in the viewport.
Show All Color Camera
By default, only the Prime Color cameras that are equipped with the IR filter switcher are shown in the 3D viewport and Prime Color cameras without filter switcher are hidden. When this setting is set to true, all of the Prime Color cameras will show up in the 3D viewport.
Show Capture Volume
Selects whether the capture volume (defined as the region capable of tracking a single marker) is displayed in the viewport. Enabling this will display the volume as a wire cage around the ground plane where multiple cameras' fields of view intersect. Valid options are True, False (default).
Camera Overlap
Sets the minimum camera overlap necessary for a region to be visualized as part of the capture volume. Higher numbers represent more camera coverage, but they will tend to reduce the size of the visualized capture volume. Valid range is 1 to 25 (default 3).
Volume Resolution
Sets the resolution of the capture volume visualization. A higher number represents a more detailed visualization. Valid range is 1 to 120 (default 50).
FOV Intensity
Sets the opacity of the FOV visualization. A higher value represents a more opaque volume visualization. Valid range is 1 to 100 (default 50).
Opacity
Sets the opacity of the volume visualization. A value of 1 is transparent and 100 is opaque. Valid range is 1 to 100 (default 100).
Synchronization Control
Determines how late camera frames are handled. Timely Delivery will drop late frames, which is ideal for real-time applications where data completeness is secondary to timeliness. Complete Delivery will hold up processing of frames when a frame is late. Automatic, the default and recommended setting, runs in Timely Delivery mode until it gets a non-trivial percentage of late frames, at which point it automatically switches to Complete Delivery.
Shutter Offset
Delays the shutter timing of the selected tracking camera group by N microseconds.
Mask Width (pixels)
Sets the extra pixel coverage (width) for masking visible markers when the mask visible function is used. A larger number will block a wider grouping of pixels simultaneously. Valid range is determined by the resolution of the cameras.
Mask Height (pixels)
Sets the extra pixel coverage (height) for masking visible markers when the mask visible function is used. A larger number will block a wider grouping of pixels simultaneously. Valid range is determined by the resolution of the cameras.
The Calibration pane is used for calibrating the mocap system through the calibration wanding process. This page provides descriptions of the fields and settings included in the Calibration pane. Read through the Calibration workflow page to learn about the calibration process in detail.
In Motive, the Calibration pane can be accessed under the View tab or by clicking the icon on the main toolbar.
Mask Visible
This masks all pixels that are above the set threshold. By default, the threshold is set to 200, but this can be changed by the user in the Cameras pane. Pixels in a camera image have a grayscale value between 0 and 255 inclusive. If the default threshold is used, a pixel that is above 200 will be blocked along with the surrounding pixels.
This feature is a quick way to block data that is not needed and can be used in tandem with manual masking.
Start Wanding
This will start recording wand samples. After masking the cameras, press the start wanding button to begin your wand wave.
Reset
This will stop wand acquisition and the calibration solver.
Calibration Type
You can select different calibration types before wanding: Full, Refine, Refine Extrinsic Only, Visualize Only.
Full: Calibrate cameras from scratch, discarding any prior known position of the camera group or lens distortion information. A Full calibration will also take the longest time to run.
Refine: Adjusts for slight changes in the calibration of the cameras based on a prior calibration. This will solve faster than a Full calibration. Only use this if your previous calibration closely reflects the current placement of the cameras; in other words, Refine calibration only works if you have not moved the cameras significantly since you last calibrated them. Only slight changes in camera position and orientation are allowed, such as those that occur naturally from the environment (e.g., mount expansion).
Visualize Only: Only renders the calibration solution visual and will not calibrate your cameras. This can be used to validate the quality of an existing calibration by comparing the position and orientation of the cameras.
OptiWand
This option allows the user to select which calibration wand they are using. The dimensions must match the wand exactly in order for the system to be properly calibrated.
Calibration Wands:
Wands come in 250, 400, and 500 mm sizes. Custom wands can also be used. A 250 mm wand should be used for smaller volumes or for systems whose cameras have longer focal length lenses, because the cameras will not be able to see all 3 markers on a 500 mm wand if the wand is close to the camera or the camera has a very narrow view angle due to its lens type.
If your cameras are not collecting wand samples while wanding, you may need to use a shorter wand. A 250 mm wand is good to use in most small to medium volumes. When making a calibration wand, understand that the system accuracy will be tied directly to the accuracy with which the wand is constructed: a poorly measured wand will result in poor calibration results. To make a wand, all that is needed is 3 markers placed at set distances in a line.
Wand Length (mm)
This can be set when creating a custom wand and is the distance between the two outer marker centers. The accuracy of this measurement directly impacts camera calibration results, so be careful when creating and measuring a custom wand.
Center Distance (mm)
Defines the distance in millimeters between the outer post and the center post (use the shorter of the two center offset distances). For use with custom calibration wands.
Initiates the calibration solver. Press this button after collecting enough wand samples.
Applies the calibration results to the cameras. Once pressed, this button will bring up a calibration result box. If the calibration result is satisfactory, press Apply. After you apply the calibration, the Calibration pane will switch over to the Ground Plane tab so you can set the global origin.
While wanding, the bottom part of the Calibration pane shows a table of the number of samples collected for each camera in the system. The sample counts increase as the wand is waved through the capture volume.
The calibration results are shown in the Calibration Engine portion of the Calibration pane, with the elapsed time of the calibration solver at the bottom of the list. If no calibration is being processed, this area remains blank. However, when wanding or the calibration solver is underway, this field is populated with a table showing the live results of the solution. The components of that table are described below.
As the calibration proceeds through the various phases of the solution, you may notice the results slowing when a phase is finishing. Let the calibration finish all phases. Once the solver converges on an appropriate solution, press the Apply Result button to apply the solution to the cameras. If you are unsatisfied with the results, press Reset near the top of the pane to cancel the results.
Ground plane tab under the calibration pane
Sets the location of the global origin. Use an L-Frame or 3 markers in the shape of an 'L'. If only 3 markers are seen by the cameras, you can simply press Set Ground Plane. If more markers are in view, select the 3 markers you want to use in the 3D viewport and then press Set Ground Plane.
Motive 1.6 and earlier: L-Frame long (marked Z) leg interpreted as -Z; L-Frame short (unlabeled) leg interpreted as +X.
Motive 1.7: L-Frame long (marked Z) leg interpreted as +Z; L-Frame short (unlabeled) leg interpreted as -X.
In this section you can assign the Vertical Offset value. The Vertical Offset (mm) is the difference in height (y-direction) between the L-frame vertex marker and the actual ground plane. Use positive values to set the global origin below the 3-marker vertex and negative values to set the global origin above it. Motive will recognize calibration squares, unless custom designed, and will ask to correct the offset value before the calibration process. However, the global origin is arbitrary and can be placed anywhere the user desires.
The Ground Plane Refinement feature can be used to refine the ground plane. You can select multiple reconstructions and use the corresponding 3D points to level the ground plane. This feature assumes that the selected markers are all placed on the ground with a given vertical offset (mm) between the marker centroids and the ground surface, and it uses the selected samples to refine the ground plane.
Especially in large-scale volumes where the floor is not uniform, defining a ground plane using the calibration square may not be sufficient because it would be referencing just a local part of the volume. For such cases, this feature allows users to further refine the ground plane.
For example, you can evenly spread out 4 or more spherical markers throughout the floor. Specify the marker centroid to ground vertical offset distance, which would be the radius of the marker in this case. Then press the Ground Refinement button. This will change the vertical location of the floor, ensuring all of the markers are above the floor.
The Volume Translation modifies the global origin after it has been set.
Simply enter the amounts you want to translate the origin in the X, Y, and/or Z direction and press the Apply Translation button. There is no limit to the number of translations that can be applied, and there is no memory once a translation is applied. To revert a translation, simply translate the origin by an equivalent amount in the opposite direction. If there is existing 3D data in the Take, you will need to reconstruct a new set of 3D data from the recorded 2D data after the translation has been applied.
The Volume Rotation is used to apply a rotational offset to the current global origin. If there is existing 3D data in the Take, you will need to reconstruct a new set of 3D data from the recorded 2D data after the rotation has been applied.
Camera Health Info
Displays various assessments of camera health over the 2D camera views, for troubleshooting performance issues. If any performance issue is detected, the corresponding problem will be listed at the bottom of the 2D camera view.
Reticles
When enabled, renders a crosshair on top of the 2D camera views, which can be useful for camera aiming.
Masks
Enables displaying masked area on the 2D camera views, in red.
Backproject Markers
Enables markers selected from the 3D Perspective View to be also highlighted with yellow crosshairs in the 2D camera view, based on calculated position. Crosshairs that are not directly over the marker tend to indicate occlusion or poor camera calibration.
Marker Filter
When enabled, filtered reflections will be labeled with the corresponding object filters in the 2D camera view.
Marker Coordinates
Displays the 2D coordinates of the detected object centroids within the captured image, in pixels.
Marker Centroids
Displays crosshairs on marker centroids in the 2D view.
Marker Boundaries
Displays a box around each marker, indicating its calculated edges after the brightness threshold.
Marker Circularity
Displays the roundness of an object. A value of 1 indicates maximum roundness, while a value of 0 indicates no roundness.
Marker Aspect Ratio
Displays the ratio of object width to object height as a decimal, resolved to .01 pixel.
Marker Size
Displays the area of the object in pixels, accurate to .01 pixel.
Marker Label
Displays the pre-identified labels assigned to 2D objects for initial tracking from frame to frame.
Pixel Inspection
Displays X,Y coordinates for cursor location when hovering over a camera, and pixel brightness for selected pixels when a region is drag-selected. Inspecting pixel brightness can be useful during camera focusing, tuning, and aiming.
Texture Streaming
Disables or enables texture streaming of reference videos on the 2D camera viewport.
Visual FPS Target
Sets a maximum frame display rate for the 2D camera view.
Background Color
Selects the color to display in the viewport between camera panes.
Camera Info
Enables text overlay of pertinent camera information on the 2D Multi Camera view panes. Displayed information includes image mode, time, data rate, frame ID, visual FPS, number of objects, camera serial, exposure value, threshold value, IR intensity value, internal temperature, and camera sync mode.
Show Distortion
Displays each camera’s lens distortion map.
Overlay Color
Selects the color of the lens distortion map display.
Overlay Transparency
Selects the transparency percentage for the lens distortion map.
Overlay Resolution
Selects the level of detail for displaying the lens distortion. More specifically, it sets the number of distortion grids along the width and height of the distortion map.
Show as Predistortion
Selects whether the map is shown as pre-distorted or distorted.
Display Mode
Sets levels of details for the markers displayed in the multi-camera 2D view. Available modes are Frame Buffer, Marker Centers, and Automatic LOD modes. Default is Automatic LOD.
Automatic LOD switches between Frame Buffer mode and Marker Centers mode depending on the zoom level of the 2D camera view, or the LOD Threshold setting.
Frame Buffer mode pushes the entire camera frame to the video card for scaling and display. It provides verbose information on detected reflections and centroids, but it is data intensive at the same time.
Marker Centers mode merely draws a white circle of the rough size and shape of the marker as it would appear. More specifically, it displays the reflections by size and location, and it is significantly less hardware-intensive.
Pane Gap
The distance between 2D Multi View camera panes, in pixels.
LOD Threshold
The zoom percentage at which the system switches between Marker Centers and Frame Buffer mode.
Raster Priority
Defines the update rate for the camera pixel data shown in the 2D camera views. The priority value ranges from 1 - 6, and a higher priority indicates a higher rate of update.
Camera Names
Displays the camera model, serial, and master/slave status above and below camera objects.
Text Size
Adjusts the size of the camera name text.
Solid Cameras
Setting this to true disables camera object transparency in the 3D Perspective View.
Labeled Marker Color
Sets the color for labeled markers in the 3D view port.
Active Marker Color
Sets the color for active markers in the 3D viewport.
Unlabeled Marker Color
Sets the color for Unlabeled markers in the 3D view port.
Selection Color
Sets the color of selections in the 3D view port.
Marker History
Displays a history trail of marker positions over time.
Selected History Only
Determines whether marker history will be shown for selected markers or all markers.
Assigned Markers
Enables or disables display of assigned markers (also called solved, or expected, marker positions) on rigid body or skeleton assets.
Show Marker Count
Displays the number of markers detected by the system as well as the number of markers selected at the bottom right corner of the perspective view.
Show Marker Labels
Displays marker labels for selected markers in the perspective view.
Show Timecode
Enables or disables timecode values displayed on the 3D viewport. Timecode will be available only when the timecode signal is inputted through the eSync.
Show Marker Infos
When this is set to true, the 3D positions and estimated diameters of selected markers will be displayed in the 3D viewport.
Display mode
Toggles camera numbers on and off in the 3D Perspective View.
Marker Diameter
Determines whether marker sizes in the 3D Perspective View are represented by the calculated size or overwritten with a set diameter.
Diameter (mm)
Sets the diameter in millimeters for marker sizes in the 3D Perspective View, if Marker Diameter is set to Set Diameter.
Background Color
Selects the background color displayed in the 3D Perspective View.
Fog Effect
Turns on a gradient “fog” effect in the 3D Perspective View.
OptiTrack Logo
Overlays the OptiTrack logo on top of the 3D Perspective View.
Grid Color
Selects the color of the ground plane grid in the 3D Perspective View.
Grid Transparency
Selects the level of transparency applied to the ground plane grid in the 3D Perspective View.
Grid Size
Selects the size of the ground plane grid in the 3D Perspective View. Specifically, it sets the number of grid squares (20 cm x 20 cm) along the positive and negative directions of both the X and Z axes.
Coordinate Axis
Displays the coordinate axis in the 3D viewport.
Video Overlay Display FPS
Controls how often scene video overlays are updated for display.
Undistort Video Overlay
Removes distortions from the grid when displaying the video distortion overlay in the reference video.
Show Tracked Rays
Displays tracked rays in the view port. Tracked rays are marker rays with residual values less than the Maximum Residual setting from the reconstruction pane. In other words, tracked rays are marker rays that are contributing to 3D reconstructions.
Show Untracked Rays
Displays the untracked rays in the viewport. Untracked rays are rays that start from a camera and pass through a detected 2D centroid but fail to be reconstructed in 3D space. Several untracked rays in a capture is usually a sign of bad calibration or extreme reconstruction settings.
Show Missing Rays
Displays the missing rays in the viewport. Missing rays occur when tracking a rigid body or a skeleton; they indicate marker rays that the rigid body or skeleton solve expects but that are not detected in the camera view.
Show Two Marker Distance
Enabling this will display the distance between two markers in the Perspective View pane. Two markers must be selected to calculate the distance.
Show Three Marker Angle
Enabling this will measure an angle formed by three markers in the Perspective View pane. Three markers must be selected, and the calculated angle will follow the selection order. When all three markers are selected at once, the widest angle will be measured.
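These two measurement tools compute standard 3D geometry. Motive performs these calculations internally; the sketch below only illustrates the underlying math, assuming marker positions are given as (x, y, z) coordinates. The use of numpy and the sample values are assumptions for the example.

```python
import numpy as np

def marker_distance(p1, p2):
    # Straight-line distance between two marker positions.
    return np.linalg.norm(np.asarray(p2) - np.asarray(p1))

def three_marker_angle(a, b, c):
    # Angle (in degrees) at vertex b, formed by markers a-b-c.
    v1 = np.asarray(a) - np.asarray(b)
    v2 = np.asarray(c) - np.asarray(b)
    cos_angle = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

# Hypothetical marker positions (meters):
print(marker_distance((0, 0, 0), (0.3, 0.4, 0)))            # 0.5
print(three_marker_angle((1, 0, 0), (0, 0, 0), (0, 1, 0)))  # 90.0
```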
Show Marker Colors
When labeled, each skeleton marker is colored as defined in the corresponding markerset template. Enabling this setting will color the markers for better identification of the marker labels.
Show Marker Sticks
Setting this to true will show marker sticks on skeleton assets in the 3D data, for clearer identification of skeleton markers and segments on each individual actor.
Show Selected Residual
Displays the offset distance between rays converging into a marker. The residual value will be displayed on top of the view pane. Note that ray information will be displayed only in the 2D data.
Tracked Ray Color
Sets the color for Tracked Rays in the 3D Perspective View.
Untracked Ray Color
Sets the color for Untracked Rays in the 3D Perspective View.
Missing Ray Color
Sets the color for Missing Rays in the 3D Perspective View.
Tracked Ray Transparency
Sets the level of transparency for Tracked Rays.
Untracked Ray Transparency
Sets the level of transparency for Untracked Rays.
Missing Ray Transparency
Sets the level of transparency for Missing Rays.
Missing Ray Threshold
Sets the distance in millimeters that a 2D marker must be from an expected location before declaring the marker missing.
Color Scheme
Toggles the theme for the timeline playback graph between light and dark.
Background Color
Specifies the background color for the timeline playback graph.
Autoscale
Automatically scales trajectory graphs in the Timeline pane.
Preferred Live Layout
Preferred Layout to be used for the Live mode graph pane.
Preferred Edit Layout
Preferred Layout to be used for the Edit mode graph pane.
Scope Duration
Sets the time scope of the plotted graphs in Live mode. By default, this is set to 1000, meaning the most recent 1000 captured frames of tracking history are plotted in Live mode. Increase this to plot more history data, or decrease it to zoom into the tracking history.
Show Markers
Overlays markers in the reference video when this setting is set to true.
Show Skeletons
Overlays skeletons in the reference video when this setting is set to true.
Show Rigid Bodies
Overlays rigid bodies in the reference video when this setting is set to true.
Show Distortion Grid
Displays reference camera distortion grid in the reference view.
Lock Aspect Ratio
Keeps the aspect ratio constant for reference videos.
Split Horizontal
When set to true, the reference view pane is divided into multiple columns when displaying multiple reference views.
Maximum Exposure Display
Controls the maximum value of the exposure slider in the Devices pane.
This section covers the display options for the calibration features. In previous releases, these options existed in the Calibration pane.
Panel Output
Default: Standard. Decides whether the calibration process is displayed in a list format or a grid format. The grid format allows the progress of more cameras to be seen in the pane.
Status Ring
Default: Visualize. Enables the Status Indicator Ring to glow during calibration in order to display the wanding process.
Solver Visualizations
Default: Show. This toggles the display of wand samples and point cloud calibration visuals in the 2D and 3D views. If you're running a large system on a lower-end machine or graphics card, it is best to turn this feature off; the visual display consumes computing power that could otherwise be reserved for quicker calibration results.
Lens Distortion
Default: Show. Use this to toggle the display of the lens distortion solution results in the 2D view. The lens distortion will be represented by a square grid that maps the distortion.
Cameras
Default: Show. This toggles the display of the cameras during the solve.
Wanding Projection
Default: Show. This toggles the display of wand samples projected in the 3D view. Turn this off if you're calibrating a very large system.
Projection Error
Default: Show. This toggles the display of error, reported as a color in the projected wand samples and markers. The wand samples will have a color between blue (good sample) and red (poor sample). Make sure the samples you collect are mostly good samples. As with all visual feedback, it may be a good idea to turn this off if you're calibrating a larger system.
Residual Error
Default: 6 mm. Sets the tolerance for the reported error in the projected marker samples during calibration.
Wand Error
Default: 8 mm. This sets the tolerance for the reported error in the projected wand sample.
Sample Spacing
Default: 1. Use this to increase the spacing between displayed samples that are projected in the 3D view. Increasing this will skip more samples but will make the visual wand projections easier to see.
The Builder pane is used for creating and editing trackable models, also called trackable assets, in Motive. In general, rigid body models are created for tracking rigid objects, and skeleton models are created for tracking human motions.
When created, trackable models store the positions of markers on the target object and use the information to auto-label the reconstructed markers in 3D space. During the auto-label process, a set of predefined labels gets assigned to 3D points using the labeling algorithms, and the labeled dataset is then used for calculating the position and orientation of the corresponding rigid bodies or skeleton segments.
The trackable models can be used to auto-label the 3D capture both in Live mode (real-time) and in Edit mode (post-processing). Each created trackable model will have its own properties, which can be viewed and changed under the Properties pane. If new skeletons or rigid bodies are created during post-processing, the Take will need to be auto-labeled again in order to apply the changes to the 3D data.
On the Builder pane, you can either create a new trackable asset or modify an existing one. Select either rigid body or skeleton at the bottom of the pane, and then select whether you wish to create or edit. Each feature will be explained in the sections below.
For creating rigid bodies, select the rigid body option at the bottom and access the Create tab at the top. Here, you can create rigid body assets and track any markered objects in the volume. In addition to standard rigid body assets, you can also create rigid body models for head-mounted displays (HMDs) and measurement probes.
Step 1.
Select all associated rigid body markers in the 3D viewport.
Step 2.
On the Builder pane, confirm that the selected markers match the markers that you wish to define the rigid body from.
Step 3.
Click Create to define a rigid body asset from the selected markers.
Other ways to create a rigid body
You can also create a rigid body by doing the following actions while the markers are selected:
Perspective View (3D viewport): While the markers are selected, right-click on the perspective view to access the context menu. Under the Rigid Body section, click Create From Selected Markers.
Hotkey: While the markers are selected, use the create rigid body hotkey (Default: Ctrl +T).
Step 4.
Once the rigid body asset is created, the markers will be colored (labeled) and interconnected to each other. The newly created rigid body will be listed under the Assets pane.
If the rigid bodies, or skeletons, are created in Edit mode, the corresponding Take needs to be auto-labeled. Only then will the rigid body markers be labeled using the rigid body asset, and positions and orientations computed for each frame.
This feature can be used only with HMDs that have the OptiTrack Active HMD clips mounted.
When using an OptiTrack system for VR applications, it is important that the pivot point of the HMD rigid body is placed at the appropriate location: at the root of the nose, in between the eyes. When using the HMD clips, you can utilize the HMD creation tools in the Builder pane to have Motive estimate this spot and place the pivot point accordingly. It utilizes known marker configurations on the clip to precisely position the pivot point and set the desired orientation.
Steps
First of all, make sure Motive is configured for tracking active markers.
Open the Builder pane under View tab and click Rigid Bodies.
Under the Type drop-down menu, select HMD. This will bring up the options for defining an HMD rigid body.
If the selected marker matches one of the Active clips, it will indicate which type of Active Clip is being used.
Under the Orientation drop-down menu, select the desired orientation of the HMD. The standard orientation for streaming to Unity is +Z forward, and for Unreal Engine +X forward; you can also specify the expected orientation axis on the client plugin side. For use with the OpenVR driver, set the HMD rigid body to the Z-axis forward orientation.
Hold the HMD at the center of the tracking volume where all of the active markers are tracked well.
Select the 8 active markers in the 3D viewport.
Click Create. An HMD rigid body will be created from the selected markers and it will initiate the calibration process.
During calibration, slowly rotate the HMD to collect data samples in different orientations.
Once all necessary samples are collected, the calibrated HMD rigid body will be created.
For more information: Measurement Probe Kit Guide
Steps: Probe Calibration
Open the Builder pane under View tab and click Rigid Bodies.
Bring the probe out into the tracking volume and create a rigid body from the markers.
Under the Type drop-down menu, select Probe. This will bring up the options for defining a rigid body for the measurement probe.
Select the rigid body created in step 2.
Place and fit the tip of the probe in one of the slots on the provided calibration block.
Note that there will be two steps in the calibration process: refining the rigid body definition and calibrating the pivot point. Click the Create button to initiate the probe refinement process.
Slowly move the probe in a circular pattern while keeping the tip fitted in the slot; making a cone shape overall. Gently rotate the probe to collect additional samples.
After the refinement, it will automatically proceed to the next step; the pivot point calibration.
Repeat the same movement to collect additional sample data for precisely calculating the location of the pivot or the probe tip.
When sufficient samples are collected, the pivot point will be positioned at the tip of the probe and the Mean Tip Error will be displayed. If the probe calibration was unsuccessful, repeat the calibration from step 4.
Once the probe is calibrated successfully, a probe asset will be displayed over the rigid body in Motive, and live x/y/z position data will be displayed under the Real-time Measurement section in the Measurements pane.
Steps: Sample Collection
Under the Tools tab, open the Measurements pane.
Place the probe tip on the point that you wish to collect.
Click Take Sample on the Measurement pane.
A virtual reconstruction will be created at the point, and the corresponding information will be displayed in the Measurements pane. The sampled points will also be saved in the exported file in the project directory.
Collecting additional samples will provide distance and angles between collected samples.
Sampling 3D points using the measurement probe.
Using the Builder pane, you can also modify existing rigid body assets. For editing rigid bodies, select the rigid body option at the bottom of the Builder pane and access the Edit tab at the top. This will bring up the options for editing a rigid body.
Using the Rigid Body Refinement tool for improving asset definitions.
This feature is supported in Live Mode only.
The rigid body refinement tool improves the accuracy of rigid body calculations in Motive. When a rigid body asset is initially created, Motive references only a single frame for the rigid body definition. The rigid body refinement tool allows Motive to collect additional samples in Live mode to achieve more accurate tracking results. More specifically, this feature improves the calculation of the expected marker locations of the rigid body, as well as the position and orientation of the rigid body itself.
Steps
Under View tab, open the Builder pane.
Select the Rigid Bodies option at the bottom of the pane and go to the Edit tab.
In Live mode, select an existing rigid body asset that you wish to refine.
Hold the selected rigid body at the center of the capture volume so that as many cameras as possible can clearly capture the markers on the rigid body.
Press Start Refine in the Builder pane to begin collecting samples.
Slowly rotate the rigid body to collect samples at different orientations.
Once all necessary samples are collected, the refinement results will be displayed.
The Probe Calibration feature under the rigid body edit options can be used to re-calibrate a pivot point of a measurement probe or a custom rigid body. This step is also completed as one of the calibration steps when first creating a measurement probe, but you can re-calibrate it under the Edit tab.
Steps
In Motive, select the rigid body or a measurement probe.
Bring out the probe into the tracking volume where all of its markers are well-tracked.
Place and fit the tip of the probe in one of the slots on the provided calibration block.
Click Start.
Once it starts collecting the samples, slowly move the probe in a circular pattern while keeping the tip fitted in the slot; making a cone shape overall. Gently rotate the probe to collect additional samples.
When sufficient samples are collected, the mean error of the calibrated pivot point will be displayed.
Click Apply to use the calibrated definition or click Cancel to calibrate again.
Options for translating and rotating the rigid body pivot point.
The Edit tab is used to apply translation or rotation to the pivot point of a selected rigid body. A pivot point of a rigid body represents both position (x, y, z) and orientation (pitch, roll, yaw) of the corresponding asset.
You can also use the Gizmo tools to quickly modify the pivot point of a rigid body.
Location
Use this tool to translate a pivot point along the x/y/z axes (in mm). You can also reset the translation to set the pivot point back at the geometrical center of the rigid body.
Orientation
Use this tool to apply rotation to the local coordinate system of a selected rigid body. You can also reset the orientation to align the rigid body coordinate axes with the global axes. When resetting the orientation, the rigid body must be tracked in the scene.
The OptiTrack Clip Tool recalibrates HMDs with OptiTrack HMD Clips to position the pivot point at the appropriate location. The steps are essentially the same as when first creating the HMD rigid body.
This feature is useful when tracking a spherical object (e.g. ball). It will assume that all of the markers on the selected rigid body are placed on a surface of a spherical object, and the pivot point will be calculated and re-positioned accordingly. Simply select a rigid body in Motive, open the Builder pane to edit rigid body definitions, and then click Apply to place the pivot point at the center of the spherical object.
To create skeletons in Motive, you need to select the skeleton option at the bottom of the Builder pane and access the Create tab at the top. Here, you select which Skeleton Marker Set to use, choose the calibration pose, and create the skeleton model.
Defining skeleton from a skeleton Marker Set.
Step 1.
From the skeleton creation options on the Builder pane, select a skeleton marker set from the Marker Set drop-down menu. This will bring up a skeleton avatar displaying where the markers need to be placed on the subject.
Step 2.
Refer to the avatar and place the markers on the subject accordingly. For accurate placements, ask the subject to stand in the calibration pose while placing the markers. It is important that these markers get placed at the right spots on the subject's body for the best skeleton tracking. Thus, extra attention is needed when placing the skeleton markers.
The magenta markers indicate the segment markers that can be placed at a slightly different position within the same segment.
Step 3.
Double-check the marker counts and their placements. It may be easier to use the 3D viewport in Motive to do this. The system should be tracking the attached markers at this point.
Step 4.
In the Builder pane, make sure the numbers under the Markers Needed and Markers Detected sections are matching. If the skeleton markers are not automatically detected, manually select the skeleton markers from the 3D perspective view.
Step 5.
Select a desired set of marker labels under the Labels section. Here, you can use the Default labels, which are defined by the markerset template, or assign custom labels by loading a previously prepared marker-name XML file in the label section.
Step 6.
The next step is to select the skeleton creation pose settings. Under the Pose section drop-down menu, select the desired calibration pose you want to use for defining the skeleton. This is set to the T-pose by default.
Step 7.
Ask the subject to stand in the selected calibration pose. Here, standing in a proper calibration posture is important because the pose of the created skeleton will be calibrated from it. For more details, read the calibration poses section.
Step 8.
Click Create to create the skeleton. Once the skeleton model has been defined, confirm that all skeleton segments and assigned markers are located at the expected locations. If any of the skeleton segments seem misaligned, delete and re-create the skeleton after adjusting the marker placements and the calibration pose.
In Edit Mode
If you are creating a skeleton in the post-processing of captured data, you will have to auto-label the Take to see the skeleton modeled and tracked in Motive.
Virtual Reality Marker Sets
Skeleton Marker Sets for VR applications have slightly different setup steps. See: Rigid Body Skeleton Marker Set.
To create skeletons in Motive, you need to select the skeleton option at the bottom of the Builder pane and access the Edit tab at the top.
Existing skeleton assets can be recalibrated using the existing skeleton information. Basically, the recalibration recreates the selected skeleton using the same skeleton markerset. This feature recalibrates the skeleton asset and refreshes expected marker locations on the assets.
To recalibrate skeletons, select all of the associated skeleton markers from the perspective view along with the corresponding skeleton model. Open the Builder pane, and open the Edit tab while Skeleton option is selected at the bottom. Make sure the selected skeleton is in a calibration pose, and click Recalibrate. You can also recalibrate from the context menu in the Assets pane or in the 3D Viewport.
Skeleton recalibration does not work with skeleton templates that have added markers.
Timeline Frame Range Indicator
Scrubber: Current frame.
Green: Working frame range.
Yellow: Selected frame range.
There are two different modes in Motive: Live mode and Edit mode. You can toggle between two modes from the Control Deck or by using the (~) hotkey.
Live Mode
The Live mode is mainly used when recording new Takes or when streaming a live capture. In this mode, all of the cameras are continuously capturing 2D images and reconstructing the detected reflections into 3D data in real-time.
Edit Mode
The Edit Mode is used for playback of captured Take files. In this mode, you can playback, or stream, recorded data. Also, captured Takes can be post-processed by fixing mislabeling errors or interpolating the occluded trajectories if needed.
Located in the right corner of the control deck, the status monitor can be used to monitor specific operational parameters in Motive. Click on the up/down arrows to switch the displayed status. You can also click on the status monitor to open a popup displaying all available statuses.
The following status parameters will be available:
Residual
Data
Current incoming data transfer rate (KB/s) for all attached cameras.
Point Cloud
Measured latency of the point cloud reconstruction engine.
Rigid Body
Measured latency of the rigid body solver.
Skeleton
Measured latency of the skeleton solver.
Software
Measured software latency. It represents the amount of time it takes Motive to process each frame of captured data. This includes the time taken for reconstructing the 2D data into 3D data, labeling and modeling the trackable assets, displaying in the viewport, and other processes configured in Motive.
System
Available only on Ethernet Camera systems (Prime series and Slim 13E). Measured total system latency. This is the time measured from the middle of the camera exposures to when Motive has fully solved all of the tracking data.
Streaming
The rate at which the tracking data is streamed to connected client applications.
Cameras
Available only on Ethernet Camera systems (Prime series or Slim 13E). Average temperature, in Celsius, on the imager boards of the cameras in the system.
In Motive, the Edit Tools pane can be accessed under the View tab or by clicking the corresponding icon on the main toolbar.
The Edit Tools pane contains the functionality to modify 3D data. Four main functions exist: trimming tails, filling gaps, smoothing trajectories, and swapping data points. Trimming tails refers to the clearing of data points before and after a gap. Filling gaps is the process of filling in a marker's trajectory for each frame that has no data. Smoothing trajectories filters out unwanted noise in the signal. Swapping allows two markers to swap their trajectories.
Read through the page to learn about utilizing the edit tools.
Trim on Selected
Trim on Selected trims selected trajectories within the selected time region. Gaps outside the selected time region are not trimmed. Trajectories that are not selected are untouched.
Trim on All
Trim on All trims all trajectories within the selected time region. Gaps outside the selected time region are not trimmed.
Leading
Default: 3 frames. The Trim Size Leading defines how many data points will be deleted before a gap.
Trailing
Default: 3 frames. The Trim Size Trailing defines how many data points will be deleted after a gap.
Smart Trim
Default: OFF. The Smart Trim feature automatically sets the trimming size based on trajectory spikes near the existing gap. It is often unnecessary to delete numerous data points before or after a gap, but in some cases it is useful to delete more data points when jitter is introduced by the occlusion. When enabled, this feature determines whether each end of the gap looks erroneous and deletes an appropriate number of frames accordingly. Smart Trim will not trim more frames than the defined Leading and Trailing values.
Minimum segment size
Default: 5 frames. The Minimum Segment Size determines the minimum number of frames required for a trajectory to be modified by the trimming feature. For instance, if a trajectory is continuous for fewer frames than the defined minimum segment size, that segment will not be trimmed. Use this setting to define the smallest trajectory segment that gets trimmed.
Gap size threshold
Default: 2 frames. The Gap Size Threshold defines the minimum size of a gap that is affected by trimming. Any gaps that are smaller than this value are untouched by the trim feature. Use this to limit trimming to only the larger gaps. In general it is best to keep this at or above the default, as trimming is only effective on larger trajectories.
Find Previous
Find Previous searches the selected trajectory for a gap before the current frame, highlights its range, and moves the cursor to its center.
Find Next
Find Next searches the selected trajectory for a gap after the current frame, highlights its range, and moves the cursor to its center.
Fill Selected
Fills the currently selected gap.
Fill All
Fills all gaps in the currently selected track.
Fill Everything
Fills all gaps in all tracks of the timeline.
Max Gap Size
The maximum size, in frames, that a gap can be for Motive to fill. Raising this will allow larger gaps to be filled. However, larger gaps may be more prone to incorrect interpolation.
Interpolation
Fill Target
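Gap filling interpolates a marker's trajectory across frames with no data, using one of the interpolation methods selected by the Interpolation and Fill Target options above. As a rough sketch only, not Motive's implementation, the example below fills a gap with cubic interpolation; scipy and the sample data are assumptions for illustration.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def fill_gap(tracked_frames, tracked_positions, gap_frames):
    # Fit a cubic spline to the frames where the marker WAS tracked,
    # then evaluate it at the frames inside the gap.
    spline = CubicSpline(tracked_frames, tracked_positions, axis=0)
    return spline(gap_frames)

# Hypothetical example: marker tracked at frames 0-4 and 8-12, gap at 5-7.
frames = np.array([0, 1, 2, 3, 4, 8, 9, 10, 11, 12])
positions = np.random.rand(10, 3)   # (N, 3) X/Y/Z positions, stand-in data
filled = fill_gap(frames, positions, np.array([5, 6, 7]))
print(filled.shape)                 # (3, 3): one X/Y/Z row per filled frame
```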
Smooth Selection
Applies smoothing to the selected portion of the track.
Smooth Track
Applies smoothing to all frames of the track.
Smooth All Tracks
Applies smoothing to all frames on all tracks of the current selection in the timeline.
Max. Freq (Hz)
Determines how strongly your data will be smoothed. The lower the setting, the more smoothed the data will be. High frequencies are present during sharp transitions in the data, such as foot-plants, but can also be introduced by noise in the data. Commonly used ranges for the filter cutoff frequency are 7-12 Hz, but you may want to adjust it upward for fast, sharp motions to avoid softening transitions that need to stay sharp.
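The page does not specify Motive's filter implementation; a common choice for this kind of cutoff-frequency smoothing in mocap post-processing is a zero-lag low-pass Butterworth filter. The sketch below illustrates the idea under that assumption, with the trajectory given as an (N, 3) NumPy array.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def smooth_trajectory(positions, frame_rate, cutoff_hz=10.0):
    # 4th-order low-pass Butterworth; cutoff normalized to the Nyquist rate.
    b, a = butter(4, cutoff_hz / (frame_rate / 2.0))
    # filtfilt runs the filter forward and backward, so no phase lag is introduced.
    return filtfilt(b, a, positions, axis=0)

# Hypothetical usage: a gap-free trajectory captured at 120 FPS, 10 Hz cutoff.
trajectory = np.cumsum(np.random.randn(500, 3), axis=0)
smoothed = smooth_trajectory(trajectory, frame_rate=120.0, cutoff_hz=10.0)
```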
Find Previous
Jumps to the most recent detected marker swap.
Find Next
Jumps to the next detected marker swap.
Markers to Swap
Selects the markers to be swapped.
Apply Swap
Swaps the two markers selected in the Markers to Swap list.
In Motive, the Data Streaming pane can be accessed under the View tab or by clicking the corresponding icon on the main toolbar.
For explanations of the streaming workflow, read through the corresponding wiki page.
Advanced Settings
The Data Streaming pane contains advanced settings that are hidden by default. Access these settings by going to the menu in the top-right corner of the pane and clicking Show Advanced; all of the settings, including the advanced ones, will then be listed in the pane.
The list of advanced settings can also be customized to show only the settings that are needed for your specific capture application. To do so, go to the pane menu, click Edit Advanced, and uncheck the settings that you wish to hide from the default list. Once the desired settings are configured, click Done Editing to apply the customization.
Data Streaming pane in Motive
The OptiTrack Streaming Engine allows you to stream tracking data via Motive's free streaming plugins or any custom-built NatNet interface. To begin streaming, select Broadcast Frame Data. Select which types of data (e.g. markers, rigid bodies, or skeletons) will be streamed, noting that some third-party applications will accept only one type of data. Before you begin streaming, ensure that the network type and interface are consistent with the network you will be streaming over and with the settings in the client application.
Broadcast Frame Data
(Default: False) Enables/disables broadcasting, or live-streaming, of the frame data. This must be set to true in order to start the streaming.
Local Interface
(Default: loopback) Sets the network address which the captured frame data is streamed to. When set to local loopback (127.0.0.1) address, the data is streamed locally within the computer. When set to a specific network IP address under the dropdown menu, the data is streamed over the network and other computers that are on the same network can receive the data.
Labeled Markers
(Default: True) Enables, or disables, streaming of labeled Marker data. These markers are point cloud solved markers.
Unlabeled Markers
(Default: True) Enables/disables streaming of all of the unlabeled Marker data in the frame.
Asset Markers
(Default: True) Enables/disables streaming of the Marker Set markers, which are named collections of all of the labeled markers and their positions (X, Y, Z). In other words, this includes markers that are associated with any of the assets (Marker Set, Rigid Body, Skeleton). The streamed list also contains a special marker set named "all", which is a list of the labeled markers in all of the assets in a Take. In this data, skeleton and rigid body markers are point cloud solved and model-filled on occluded frames.
Rigid Bodies
Skeletons
(Default: True) Enables/disables streaming of skeleton tracking data from active skeleton assets. This includes the total number of bones and their positions and orientations with respect to the global, or local, coordinate system.
Skeleton Coordinates
(Default: Global) When set to Global, the tracking data will be represented according to the global coordinate system. When this is set to Local, the streamed tracking data (position and rotation) of each skeletal bone will be relative to its parent bones.
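When Skeleton Coordinates is set to Local, a client that needs global bone poses must compose each bone's parent-relative transform with its parent's global transform, walking down the hierarchy from the root. The sketch below illustrates that composition step only; it is not the NatNet API, and it uses rotation matrices for brevity where the streamed data carries rotations in quaternion form.

```python
import numpy as np

def bone_global(parent_pos, parent_rot, local_pos, local_rot):
    # parent_pos: (3,) global position of the parent bone.
    # parent_rot: (3, 3) global rotation matrix of the parent bone.
    # local_pos/local_rot: this bone's streamed parent-relative transform.
    global_pos = np.asarray(parent_pos) + np.asarray(parent_rot) @ np.asarray(local_pos)
    global_rot = np.asarray(parent_rot) @ np.asarray(local_rot)
    return global_pos, global_rot
```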
Skeleton as Rigid Bodies
[Advanced] (Default: False) When set to true, skeleton assets are streamed as a series of rigid bodies that represent respective skeleton segments.
Bone Naming Convention
(Default: FBX) Sets the bone naming convention of the streamed data. Available conventions include Motive, FBX, and BVH. The naming convention must match the format used in the streaming destination.
The default setting for this has been changed to FBX in Motive 2.0.
Up Axis
(Default: Y Axis) Selects the upward axis of the right-handed coordinate system in the streamed data. When streaming to an external platform with a Z-up right-handed coordinate system (e.g. biomechanics applications), change this to Z Up. When set to Z-up, the global axes are rotated -90 degrees about the x-axis.
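For clients that do the conversion themselves, the Y-up to Z-up change amounts to that -90 degree frame rotation about X. A minimal sketch of the equivalent per-point remapping follows; sign conventions may differ per client, so verify against your platform.

```python
import numpy as np

def y_up_to_z_up(p):
    # Rotating the coordinate frame -90 degrees about X rotates the points +90
    # degrees about X: (x, y, z) -> (x, -z, y). The old +Y (up) becomes +Z (up).
    x, y, z = p
    return np.array([x, -z, y])

print(y_up_to_z_up((0.0, 1.0, 0.0)))  # [0. 0. 1.] : "up" stays up in the new frame
```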
Remote Trigger
Type
(Default: Multicast) Selects the mode of broadcast for NatNet. Valid options are: Multicast, Unicast.
Stream Subject Prefix
[Advanced] (Default: True) When set to true, the associated asset name is added as a subject prefix to each marker label in the streamed data.
Stream Visual3D Compatible
[Advanced] Enables streaming to Visual3D. Normal streaming configurations may not be compatible with Visual3D, so this feature must be enabled when streaming tracking data to Visual3D.
Scale
[Advanced] Applies scaling to all of the streamed position data.
Command Port
[Advanced] (Default: 1510) Specifies the port to be used for negotiating the connection between the NatNet server and client.
Data Port
[Advanced] (Default: 1511) Specifies the port to be used for streaming data from the NatNet server to the client(s).
Multicast interface
[Advanced] Specifies the multicast broadcast address. (Default: 239.255.42.99). Note: When streaming to clients based on NatNet 2.0 or below, the default multicast address should be changed to 224.0.0.1 and the data port should be changed to 1001.
Multicast as Broadcast
[Advanced] Warning: This mode is for testing purposes only, and it can flood the network with the streamed data. When enabled, Motive streams out the mocap data via broadcasting instead of sending it to Unicast or Multicast IP addresses. This should be used only when Multicast or Unicast is not applicable. Broadcasting sends the streamed mocap data to the entire network that Motive is on, which may interfere with other traffic, so a dedicated NatNet streaming network may need to be set up between the server and the client(s). To use broadcast, set the streaming option to Multicast and enable this setting on the server. Once Motive starts streaming, set the NatNet client to connect as Multicast, and set the multicast address to 255.255.255.255. The client will then receive broadcast packets from the server.
TrackD Streaming Engine
(Default: False) Streams rigid body data via the Trackd protocol.
VRPN Streaming Engine
(Default: False) Streams rigid body data via the VRPN protocol.
VRPN Broadcast Port
[Advanced] (Default: 3883) Specifies the broadcast port for VRPN streaming.
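With the default transport settings above (data port 1511, multicast group 239.255.42.99), you can verify that frame data is arriving on the network even without the NatNet SDK. The sketch below is only a connectivity check, not a NatNet client; the 2-byte message ID / 2-byte payload-size header follows the layout shown in the NatNet SDK's depacketization samples, and real parsing should be left to the SDK.

```python
import socket
import struct

MULTICAST_ADDR = "239.255.42.99"  # Motive's default multicast interface
DATA_PORT = 1511                  # Motive's default data port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", DATA_PORT))

# Join the multicast group on all local interfaces.
mreq = struct.pack("4sl", socket.inet_aton(MULTICAST_ADDR), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    packet, sender = sock.recvfrom(65535)
    message_id, payload_size = struct.unpack("<HH", packet[:4])
    print(f"NatNet message {message_id}, {payload_size}-byte payload from {sender}")
```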
The Devices pane can be accessed under the View tab in Motive or by clicking the corresponding icon on the main toolbar.
In Motive, all of the connected devices get listed in the Devices pane, including tracking cameras, synchronization hubs, color reference cameras, and other supported peripheral devices such as force plates and data acquisition devices. Using this pane, core settings of each component can be adjusted, including sampling rates and camera exposures. Cameras can be grouped to control the system more quickly. Lastly, when specific devices are selected in this pane, their respective properties get listed under the Properties pane, where you can view and modify the settings.
At the very top of the devices pane, the master camera system frame rate is indicated. All synchronized devices will be capturing at a whole multiple or a whole divisor of this master rate.
The master camera frame rate is indicated at top of the Devices pane. This rate sets the framerate which drives all of the tracking cameras. If you wish to change this, you can simply click on the rate to open the drop-down menu and set the desired rate.
eSync2 users: If you are using the eSync2 synchronization hub to synchronize the camera system to another signal (e.g. Internal Clock), you can apply a multiplier/divisor to the input signal to adjust the camera system frame rate.
By clicking on the down-arrow button under the camera frame rate, you can expand the list of grouped devices. At first, you may not have any grouped devices. To create new groups, select multiple devices listed under this panel, right-click to bring up the context menu, and create a new group. Grouping the cameras allows easier control over multiple devices in the system.
The tracking cameras section lists all of the motion capture cameras connected to the system. Here, you can configure and control the cameras. You can right-click on the camera setting headers to show/hide specific camera settings and drag them around to change the order. When you have multiple cameras selected, changing a setting will modify it for all of the selected cameras. You can also group the cameras to select and change settings quickly. The configurable options include:
Framerate multiplier
Exposure length (microseconds)
IR LED ring on/off
Real-time reconstruction contribution
Imager Gain
IR Filter on/off
Sets the amount of time that the camera exposes per frame. The minimum and maximum values will depend on both the type of camera and the frame rate. A higher exposure will allow more light in, creating a brighter image that can increase visibility for small and dim markers. However, setting the exposure too high can introduce false reflections, larger marker blooms, and marker blurring, all of which can negatively impact marker data quality.
Exposure value is measured in scanlines for V100 and V120 series cameras, and in microseconds for Flex13, S250e and PrimeX Series cameras.
This setting enables or disables illumination of the LEDs on the camera IR LED ring. In certain applications, you may want to disable this setting to stop the IR LEDs from strobing. For example, when tracking active IR LED markers, there is no need for the cameras to emit IR light, so you may want to disable this setting to stop the IR illumination, which may introduce additional noise in the data.
The IR intensity setting is now an on/off setting. Please adjust the exposure setting to adjust the brightness of the image in the IR spectrum.
In most applications, you can have all of the cameras contributing to the 3D reconstruction engine without any problem. But for very high camera count systems, having every camera contribute to the reconstruction engine can slow down the real-time point cloud solve and result in dropped frames. In this case, you can disable a few cameras from real-time reconstruction to prevent frame drops and use their recorded 2D data later in post-processing.
Increasing a camera’s gain will brighten the image, which can improve tracking range at very long distances. Higher gain levels can introduce noise into the 2D camera image, so gain should only be used to increase range in large setup areas, when increasing exposure and decreasing lens f-stop does not sufficiently brighten up the captured image.
Sets the camera to view either visible or infrared light on cameras equipped with a Filter Switcher. Infrared Spectrum should be selected when the camera is being used for marker tracking applications. Visible Spectrum can optionally be selected for full frame video applications, where external, visible spectrum lighting will be used to illuminate the environment instead of the camera’s IR LEDs. Common applications include reference video and external calibration methods that use images projected in the visible spectrum.
This property sets the resolution of the images that are captured by selected cameras. Since the amount of data increases with higher resolution, depending on which resolution is selected, the maximum frame rate allowed by the network bandwidth will vary.
The bit-rate setting determines the transmission rate output from the selected color camera. This is how you control the data output from color cameras to avoid overloading the camera network bandwidth. At a higher bit-rate, more data is output and the image quality is better, since less image compression is applied. However, too much data output may overload the network bandwidth and result in frame drops. Thus, it is best to minimize this setting while keeping the image quality at an acceptable level.
Detected force plates and NI-DAQ devices will get listed under the Devices pane as well. You can apply multipliers to the sampling rate if they are synchronized through a trigger. If they are synchronized via a reference clock signal (e.g. Internal Clock), their sampling rate will be fixed to the rate of that signal.
The Data pane is used for managing the Take files. This pane can be accessed under the View tab in Motive or by clicking the icon on the main toolbar.
Action Items
What happened to the Project TTP Files?
The TTP project file format is deprecated starting from the 2.0 release. Now, recorded Takes are managed by simply loading the session folders directly onto the new Data pane. For exporting and importing software setting configurations, the Motive profile file format replaces the previous role of the TTP file. The Motive profile contains software configurations such as reconstruction settings, application settings, data streaming settings, and many others. Camera calibrations are no longer saved in TTP files; they are saved only in calibration (CAL) files. TTP files can still be loaded in Motive 2.0; however, we suggest moving away from using TTP files.
Set the selected session as the current session.
Rename the session folder.
This creates a folder under the selected directory.
Opens the session folder in the file explorer.
Delete the session folder. All of its contents will be deleted as well.
When a session folder is selected, associated Take files and their descriptions are listed in a table format on the right-hand side of the Data pane. For each Take, general descriptions and basic information are shown in the columns of the respective row. To view additional descriptions, click on the pane menu, select the Advanced option, and all of the descriptions will be listed. For each of the enabled columns, you can click on the arrow next to it to sort up/down the list of Takes depending on the category.
A search bar is located at the bottom of the Data pane, and you can search a selected session folder using any number of keywords and search filters. Motive will use the text in the input field to list out the matching Takes from the selected session folder. Unless otherwise specified, the search filter will scope to all of the columns.
Search for exact phrase
Wrap your search text in quotation marks.
e.g. Search "shooting a gun" to find a file named Shooting a Gun.tak.
Search specific fields
To limit the search to specific columns, type field:, plus the name of a column enclosed in quotation marks, and then the value or term you're searching for. Multiple fields and/or values may be specified in any order.
e.g. field:"name" Lizzy, or field:"notes" Static capture.
Search for true/false values
To search specific binary states from the Take list, type the name of the field followed by a colon (:), and then enter either true ([t], [true], [yes], [y]) or false ([f], [false], [no], [n]).
e.g. Best:[true], Solved:[false], Video:[T], Analog:[yes]
The table layout can also be customized. To do so, go to the pane menu and select New or any of the previously customized layouts. Once you are in a customizable layout, right-click on the top header bar and add or remove categories from the table.
A list of take names can be imported from either a CSV file or plain text that contains one take name per line. Using this feature, you can plan, organize, and create a list of capture names ahead of the actual recording. Once take names have been imported, a list of empty takes with the corresponding names will be created under the selected session folder.
From a Text
Take lists can be imported by copying a list of take names and pasting them onto the Data pane. Take names must be separated by carriage returns; in other words, each take name must be in a new line.
From a CSV File
Take lists can also be imported from a CSV file that contains a take name on each line.
Saves the selected take
Reverts any changes that were made. This does not work on the currently opened Take.
Selects the current take and loads it for playback or editing.
Allows the current take to be renamed.
Opens an explorer window to the current asset path. This can be helpful when backing up, transferring, or exporting data.
Separate reconstruction pipeline without the auto-labeling process. Reconstructs 3D data using the 2D data.
Separate auto-labeling pipeline that labels markers using the existing tracking asset definitions. Available only when 3D data is reconstructed for the Take.
Combines 2D data from each camera in the system to create a usable 3D take. It also incorporates assets in the Take to auto-label and create rigid bodies and skeletons in the Take. Reconstruction is required in order to edit or export the skeleton or the rigid body data in the Take.
Solves 6 DoF tracking data of skeletons and rigid bodies and bakes them into the TAK recording. When the assets are solved, Motive reads from recorded Solve instead of processing the tracking data in real-time.
Performs all three reconstruct, auto-label, and solve pipelines in consecutive order. This basically recreates 3D data from recorded 2D camera data.
Opens the Export dialog window to select and initiate file export. Valid formats for export are CSV, C3D, FBX, BVH.
Opens the export dialog window to initiate scene video export to AVI.
Exports an audio file when selected Take contains audio data.
Permanently deletes the 3D data from the take. This option is useful in the event reconstruction or editing causes damage to the data.
Unlabels all existing marker labels in 3D data. If you wish to re-auto-label markers using modified asset definitions, you will need to first unlabel markers for respective assets.
Deletes 6 DoF tracking data that was solved for skeleton and rigid bodies. If Solved data doesn't exist, Motive instead calculates tracking of the objects from recorded 3D data in real-time.
Archives the original take file and creates a duplicate version, minus any 3D data. Archiving a take will reduce the size of the active take file while preserving the 2D camera data in a backup sub-directory for later use, if necessary.
Opens a dialog box to confirm permanent deletion of the take and all associated 2D, 3D, and Joint Angle Data from the computer. This option cannot be undone.
Deletes all assets that were recorded in the take.
Copies the assets from the current capture to the selected Takes.
The Graph View pane is used to visualize tracking data in Motive. This pane can be accessed from the command bar (View tab > Graph) or simply by clicking the corresponding icon. This page provides instructions and tips on how to efficiently utilize the Graph View pane in Motive.
Using the Graph View pane, you can visualize and monitor multiple data channels, including 3D positions of reconstructed markers, 6 Degrees of Freedom (6 DoF) data of trackable assets, and signals from integrated external devices (e.g. force plates or NI-DAQ devices). The Graph View pane offers a variety of graph layouts for the most effective data visualization. In addition to the basic layouts (channel, combined, gapped), custom layouts can also be created for monitoring specific data channels only. Up to 9 graphs can be plotted in each layout, and up to two panes can be opened simultaneously in Motive.
Graphs can be plotted in both Live and Edit mode.
In Live Mode, the following data can be plotted in real-time:
Rigid body 6 DoF data (Position and Orientation)
Force Plate Data (Force and Moment)
Analog Data
In Edit Mode, the graphs can be used to review and post-process the captured data:
3D Positions of reconstructed markers
Rigid body 6 DoF data (Position and Orientation)
Force Plate Data (Force and Moment)
Analog Data
Creates a new graph layout.
Deletes current graph layout.
Saves the changes to the graph layout XML file.
Takes an XML snapshot of the current graph layout. Once a layout has been particularized, both the layout configuration and the item selection are fixed, and the layout can be exported and imported across different sessions.
Opens the layout XML file of the current graph layout for editing.
Opens the file location of where the XML files for the graph layouts are stored.
Alt + left-click on the graph and drag the mouse left and right to navigate through the recorded frames. You can do the same with the mouse scroll as well.
Scroll-click and drag to pan the view vertically and horizontally throughout plotted graphs. Dragging the cursor left and right will pan the view along the horizontal axis for all of the graphs. When navigating vertically, scroll-click on a graph and drag up and down to pan vertically for the specific graph.
Other Ways to Zoom:
Press "Shift + F" to zoom out to the entire frame range.
Zoom into a frame range by Alt + right-clicking on the graph and selecting the specific frame range to zoom into.
When a frame range is selected, press "F" to quickly zoom onto the selected range in the timeline.
The frame range selection is used when making post-processing edits on specific ranges of the recorded frames. Select a specific range by left-clicking and dragging the mouse left and right; the selected frame ranges will be highlighted in yellow. You can also select more than one frame range by shift-selecting multiple ranges.
Left-click and drag on the nav bar to scrub through the recorded frames. You can do the same with the mouse scroll as well.
Scroll-click and drag to pan the view range.
Zoom into a frame range by re-sizing the scope range using the navigation bar handles. You can also easily do this by Alt + right-clicking on the graph and selecting a specific range to zoom into.
The working range (also called the playback range) is both the view range and the playback range of a corresponding Take in Edit mode. Recorded tracking data will be played back and shown on the graphs only within the working frame range. This range can also be used to output a specific frame range when exporting tracking data from Motive.
The working range can be set from different places:
In the navigation bar of the Graph View pane, you can drag the handles on the scrubber to set the working range.
You can also use the navigation controls on the Graph View pane to zoom in or zoom out on the frame ranges to set the working range.
The selection range is used to apply post-processing edits only to a specific frame range of a Take. The selected frame range will be highlighted in yellow in both the Graph View pane and the Timeline pane.
Gap indication
When playing back a recorded capture, the red coloring on the navigation bar indicates the amount of occlusion among labeled markers. Brighter red means that more markers have labeling gaps.
Left-click and drag on the graph to select a specific frame range. Frame range selection can be utilized for the following workflows:
Tracking Data Export: Exporting tracking data for selected frame ranges.
Reconstruction: Performing the post-processing reconstruction (Reconstructing / Reconstruct and Auto-labeling) pipeline on selected frame ranges.
Labeling: Assigning marker labels, modifying marker labels, or running the auto-label pipeline on selected ranges only.
Data Deleting: Deleting 3D data or marker labels on selected ranges.
The layouts feature in the Graph View pane allows users to organize and format graphs to their preference. The graph layout is selected under the drop-down menu located at the top-right corner of the Graph View pane.
In addition to default graph layouts (channels view, combined view, and tracks view) which have been migrated from the previous versions of Motive, custom layouts can also be created. With custom layouts, users can specify which data channels to plot on each graph, and up to 9 graphs can be configured on each layout. Furthermore, asset selections can be locked to labeled markers or assets.
Layouts under the System Layouts category are the same graphs that existed in the old timeline editor.
The Channel View provides X/Y/Z curves for each selected marker, providing verbose motion data that highlights gaps, spikes, or other types of noise in the data.
The Combined View provides X/Y/Z curves for each selected marker on the same plot. This mode is useful for monitoring position changes without having to translate or rescale the y-axis of the graph.
Graph layout customization is further explained in a later section: Customizing Layout.
Right-click on the graph, go to the Grid Layout, and choose the number of rows and columns that you wish to put in the grid. (max 9 x 9)
Click on a graph from the grid. The graph will be highlighted in yellow. Within the grid, only the selected graph will be edited when making changes using the Graph Editor.
Next, you need to pick data channels that you wish to plot. You can do this by checking the desired channels under the data tab while a graph is selected. Only the checked channels will be plotted on the selected graph. Here, you can also specify which color to use when plotting corresponding data channels.
Then, under the Visual tab, format the style of the graph. You can configure the graph axes, assign a name to the graph, display values, etc. Most importantly, configure the View Style to match the desired graph format.
Repeat the above two steps to configure each of the graphs in the layout.
Select an asset (marker, Rigid Body, Skeleton, force plate, or NI-DAQ channel) that you wish to monitor.
Once all related graphs are locked, move on to the next selection and lock the corresponding graphs.
When you have the layout configured with the locked selections, you can save the configuration, along with the implicit selections, temporarily to the layout. Until the layout is particularized onto explicit selections, you will need to select the related items in Motive to plot the respective graphs.
It is important to particularize the customized layout once all of the graphs are configured. This action saves and explicitly fixes the selections that the graphs are locked onto. Once the layout has been particularized, you can re-open the same layout in different sessions and plot the data channels from the same subjects without locking the selection again. Specifically, the particularized layout will look for items (labeled markers, Rigid Bodies, Skeletons, force plates, or analog channels) with the same names that the layout was particularized onto.
Only enabled, or checked, data channels will be plotted on the selected graph using the specified color. Once channels are enabled, an asset (marker, Rigid Body, Skeleton, force plate, or DAQ channel) must be selected and locked.
Plot 3D position (X/Y/Z) data of selected, or locked, marker(s) onto the selected graph.
Plot pivot point position (X/Y/Z), rotation (pitch/yaw/roll), or mean error values of selected, or locked, Rigid Body asset(s) onto the selected graph.
Plot analog data of selected analog channel(s) from a data acquisition (NI-DAQ) device onto the selected graph.
Plot force and moment (X/Y/Z) of selected force plate(s). The plotted graph respects the coordinate system of the force platforms (Z-up).
Labels the selected graph.
Configures the style of the selected graph:
Channel: Plots selected channels onto the graph.
Combined: Plots X/Y/Z curves for each selected marker on the same plot.
Gap: The Tracks View style allows you to easily monitor the occluded gaps on selected markers.
Live: The Live mode is used for plotting the live data.
Enables/disables range handles that are located at the bottom of the frame selection.
Sets the height of the selected row in the layout. The height is determined as a ratio to the sum of all stretch values: (row stretch value for the selected row) / (sum of row stretch values from all rows) * (size of the pane). For example, rows with stretch values 1, 1, and 2 receive 25%, 25%, and 50% of the pane height, respectively.
Sets the width of the selected column in the layout. The width is determined as a ratio to the sum of all stretch values: (column stretch value for the selected column) / (sum of column stretch values from all columns) * (size of the pane).
Displays current frame values for each data set.
Displays the name of each plotted data set.
Plots data from the primary selection only. The primary selection is the last item selected from Motive.
Shows/hides x grid-lines.
Shows/hides y grid-lines.
Sets the size of the major grid lines, or tick marks, on the y-axis values.
Sets the size of the minor grid lines, or tick marks, on the y-axis values.
Sets the minimum value for the y-axis on the graph.
Sets the maximum value for the y-axis on the graph.
In the Assets pane, the context menu for the involved assets can be accessed by clicking the menu icon or by right-clicking on the selected asset(s). The context menu lists the available actions for the corresponding assets.
Column Header | Description |
---|---|
Assets pane: While the markers are selected in Motive, click on the add button in the Assets pane.
Tip: Prime series cameras illuminate blue when in Live mode, green when recording, and turn off in Edit mode.
Average of the values of all live-reconstructed 3D points. This is available only in the Live mode or in the 2D mode.
Important software notifications are reported at the right corner of the control deck. Click on the icon to view the message. Only important configuration notifications are reported here; software status messages are reported in the Status Log pane.
Sets which interpolation method to use. Available options are constant, linear, cubic, pattern-based, and model-based. For more information, read the corresponding data editing page.
When using interpolation to fill gaps in a marker's trajectory, other reference markers are selected alongside the target marker. The Fill Target drop-down menu specifies which of the selected markers to set as the target marker for the pattern-based interpolation.
(Default: True) Enables/disables streaming of Rigid Body data, which includes the names of Rigid Body assets as well as the positions and orientations of their pivot points.
(Default: False) Allows recording to be triggered remotely using XML commands.
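As a hedged illustration of such a remote trigger, the sketch below broadcasts an XML capture-start command over UDP. The element names (CaptureStart, Name) and port 1510 are assumptions drawn from common NatNet remote-triggering examples, not confirmed by this page; verify both against the remote trigger documentation for your Motive version.

```python
# Hedged sketch: remotely start a recording by broadcasting an XML command.
# The XML schema (<CaptureStart>, <Name VALUE=...>) and UDP port 1510 are
# assumptions based on typical NatNet remote-trigger examples.
import socket

command = (
    '<?xml version="1.0" encoding="utf-8"?>'
    '<CaptureStart><Name VALUE="take_001"/></CaptureStart>'
)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
sock.sendto(command.encode("utf-8"), ("255.255.255.255", 1510))  # assumed command port
sock.close()
```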
For information on streaming data via the Trackd Streaming Engine, please consult the Trackd documentation or contact Mechdyne. Note that only 6 DoF Rigid Body data can be streamed via Trackd.
For information on streaming data via the VRPN Streaming Engine, please visit the VRPN documentation. Note that only 6 DoF Rigid Body data can be streamed via VRPN.
Reference cameras using the MJPEG grayscale video mode can capture either at the same frame rate as the tracking cameras or at a whole fraction of the master frame rate. In many applications, capturing at a lower frame rate is preferable for reference cameras because it reduces the amount of data recorded and output, decreasing the overall size of the capture files. This can be adjusted by configuring the rate multiplier setting.
The multiplier setting applies the selected multiplier to the master sampling rate. Multipliers cannot be applied to the tracking cameras, but you can apply them to reference cameras capturing in a grayscale reference mode. This allows the reference cameras to capture at a slower frame rate, which reduces the number of frames captured by the reference camera and the overall data size.
The mode setting indicates which video mode the cameras are set to. You can click on the icons to toggle between the tracking mode and the reference grayscale mode. Available video modes may differ slightly between camera types, but the available types include:
Object mode (tracking)
Precision mode (tracking)
MJPEG compressed grayscale mode (reference)
Ray grayscale mode (reference)
This enables/disables the contribution of the respective cameras to the reconstruction of the 3D data. When cameras are disabled from contributing to the reconstruction, they will still collect capture data, but that data will not be processed through the real-time reconstruction. Please note that 2D frames will still be recorded into the capture file, and you can run the post-processing reconstruction pipeline to obtain fully contributed 3D data in the Edit mode.
Reference cameras will also get listed under the Devices pane. Just like other cameras in the Tracking group, you can configure the camera settings, including the sampling rate multiplier to decrease the sampling rate of the camera. Additionally, the captured video and the data transfer can be configured.
Detected synchronization hubs will also get listed under the Devices pane. You can select a synchronization hub in the Devices pane and configure its input and output signals through the Properties pane. For more information, please read through the synchronization page.
For more information, please read through the force plate setup pages or the corresponding device setup page.
Option | Description |
---|---|
The left-hand section of the Data pane is used to list out the sessions that are loaded in Motive. Session folders group multiple associated Take files in Motive, and they can be imported simply by dragging-and-dropping or importing a folder into the data management pane. When a session folder is loaded, all of the Take files within the folder are loaded altogether.
In the list of session folders, the currently loaded session folder is denoted with a flag symbol, and a selected session folder is highlighted in white.
: Add a new session folder.
: Remove a session folder from the list or delete it permanently.
: Collapse the session folder sidebar.
Category | Description |
---|---|
Take lists can be imported from a CSV file that contains a take name on each row. To import, click on the menu icon at the top-right and select Import Shot List.
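For instance, such a shot list can be generated ahead of a session with a few lines of Python; the take names below are made up for illustration:

```python
# Write a shot-list CSV: one planned take name per row, no header,
# matching the format described above.
import csv

planned_takes = [f"Walk_{i:02d}" for i in range(1, 4)] + ["Jump_01", "Jump_02"]

with open("shot_list.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for name in planned_takes:
        writer.writerow([name])
```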
In the Data pane, the context menu for captured Takes can be brought up by clicking on the icon or by right-clicking on the selected Take(s). The context menu lists options for running the corresponding pipelines on the selected Take(s), including essential pipelines such as reconstruction, auto-labeling, and data export. Available options are listed below.
Opens the Delete 2D Data pop-up, where you can choose to delete the 2D data, audio data, or reference video data.
Icon | Name | Description |
---|---|---|
Right-click and drag on a graph to free-form zoom in and out on both the vertical and horizontal axes. If Autoscale Graph is enabled, the vertical axis range will be fixed according to the maximum and minimum values of the plotted data.
Start and end frames of a working range can also be set from the control deck when in the Edit mode.
Zooming: Zoom quickly into the selected range by clicking on the zoom-fit button or by using the F hotkey.
Post-processing data editing: Apply the editing tools on selected frame ranges only.
The Tracks View is a simplified view that reveals gaps, marker swaps, and other basic labeling issues that can be quickly remedied by merging multiple marker trajectories together. You can select a specific group of markers from the drop-down menu. When two markers are selected, their labels can be merged using the Merge Keys Up and Merge Keys Down tools.
In the Graphs View pane, the graph layout can be customized to monitor data from channels involved in a capture. Create a new layout from the menu > Create New Layout option, or right-click on the pane and select Create New Layout.
New layouts can be created by clicking Create Graph Layout from the pane menu located at the top-right corner.
Expand the Graph Editor by clicking on the icon on the toolbar.
When plotting live tracking data in the Live mode, set the View Style to Live. The frame range of the Live-mode graphs can be adjusted by changing the corresponding setting under the application settings.
Lock the selection for graphs that need to be linked to the selection. Individual graphs can be locked from the context menu (right-click on the graph > Lock Selection), or all graphs can be locked by clicking the lock icon on the toolbar.
The last step is to make the selection explicit by particularizing the layout. You can do this by clicking the Particularize option under the pane menu once the layout is configured and the desired selections are locked. This fixes the explicit selection in the layout XML file, and the layout will always look for specific items with the same names in the Take. Particularized graphs are indicated by a mark at the top-right corner of the graph.
The Graph Editor can be expanded by clicking on the icon on the toolbar. When this sidebar is expanded, you can select individual graphs, but other navigation controls are disabled. Using the Graph Editor, you can select a graph, choose which data channels to plot, and format the overall look to suit your needs.
Using the black color (0,0,0) for a plot will set the graph color to the color of the Rigid Body asset shown in the 3D viewport, which is set under the Rigid Body properties.
Rotate view | Right + Drag |
Pan view | Middle (wheel) click + drag |
Zoom in/out | Mouse Wheel |
Select in View | Left mouse click |
Toggle Selection in View | CTRL + left mouse click |
Cam | Shows the camera number associated with the row of data, the wanding result, or the average result of the camera group. The wand error is reported as the deviation in the wand markers across all samples. |
Samp | The number of samples used at the current stage of the solution. This number can climb as the solution converges. |
Quality | The quality rating for the current pixel error. The quality increases as the pixel error drops. Quality is shown as a progress bar: red is poor, yellow is good, and green is excellent. |
Focal | The calculated or given focal length of the camera. This does not apply to the average or wanding rows. |
PixErr | The average pixel error of the camera. This represents the two-dimensional error in the camera's ability to locate a marker. |
Simple | Use the simplest data management layout. |
Advanced | Additional column headers are added to the layout. |
Classic | Use the classic Motive layout where Take name, availability of 2D data and 3D data is listed. |
New... | Create a new customizable layout. |
Rename | Rename a custom layout. |
Delete | Delete a custom layout. |
2D Mode |
Import Shot List... | Import a list of empty Take names from a CSV file. This is helpful when you plan a list of shots in advance of the capture. |
Export Take Info... | Exports a list of Take information into an XML file. Included elements are the session name, Take name, file directory, involved assets, notes, time range, duration, and number of frames. |
Best |
Health |
Progress | The progress indicator can be used to track the processing status of the Takes. Use the indicators to track the workflow-specific progress of each Take. |
Name | Shows the name of the Take. |
2D |
3D |
Video |
Solved |
Audio |
Analog |
Data Recorded | Shows the time and the date when the Take was recorded. |
Frame Rate | Shows the camera system frame rate which the Take was recorded in. |
Duration | Time length of the Take. |
Total Frames | Total number of captured frames in the Take. |
Notes | Section for adding comments to each Take. |
Start Timecode |
Graph Editor | This opens up the sidebar for customizing a selected graph within a layout. |
Autoscale Graph | Toggles autoscaling of the X/Y/Z graphs. |
Zoom Fit (selected range) | Zooms into selected frame region and centers the timeline accordingly |
Lock Cursor Centered | Locks the timeline scrubber at the center of the view range. |
Delete Selected Keys | Delete selected frame region. |
Move Selected Keys | Translates trajectories in selected frame region. Select a range and drag up and down on a trajectory. |
Draw Keys | Manually draw a trajectory by clicking and dragging on a selected trajectory in the editor. |
Merge Keys Up |
Merge Keys Down |
Lock Selection | Locks the current selection (marker, Rigid Body, Skeleton, force plate, or NI-DAQ channel) onto all graphs in the layout. Locked selections temporarily hold the selection and can later be fixed by particularizing the layout, as described above. |
The Info pane displays real-time tracking information for a Rigid Body selected in Motive. This pane can be accessed under the View tab or by clicking the icon on the main toolbar. Reported data includes the total number of tracked Rigid Body markers, the mean error for each of them, and the 6 degree-of-freedom (position and orientation) tracking data for the Rigid Body.
Euler Angles
There are many potential combinations of Euler angles, so it is important to understand the order in which rotations are applied, the handedness of the coordinate system, and the axis (positive or negative) about which each rotation is applied. The following conventions are used for representing Euler orientation in Motive:
Rotation order: XYZ
All coordinates are *right-handed*
Pitch is degrees about the X axis
Yaw is degrees about the Y axis
Roll is degrees about the Z axis
Position values are in millimeters
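To make the convention concrete, the sketch below builds a rotation matrix from pitch/yaw/roll in the stated right-handed XYZ order. Whether Motive composes these as intrinsic or extrinsic rotations is not stated on this page, so treat the multiplication order as an assumption to verify against streamed data.

```python
import numpy as np

def rot_x(deg):  # pitch: rotation about the X axis
    c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(deg):  # yaw: rotation about the Y axis
    c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(deg):  # roll: rotation about the Z axis
    c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Right-handed, XYZ order. The composition direction (intrinsic vs.
# extrinsic) is an assumption; verify against actual streamed orientations.
R = rot_x(10) @ rot_y(20) @ rot_z(30)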
This page goes over the features available in the Markersets pane. The Markersets pane can be accessed by clicking on the icon on the toolbar.
The Marker Set is a type of asset in Motive. It is the most fundamental method of grouping related markers, and it can be used to manually label individual markers in post-processing of captured data using the Labeling pane. Note that Marker Sets are used for manual labeling only; for automatic labeling in Live mode, a Rigid Body or Skeleton asset is necessary.
Since creating Rigid Bodies or Skeletons groups the markers in each set and automatically labels them, Marker Sets are not commonly used in the processing workflow. However, they are still useful for marker-specific tracking applications or when marker labeling is done in pipelines other than auto-labeling. Marker Sets are also useful for organizing and reassigning labels.
Move to Top | Moves the selected label to the top of the labels list. |
Move Up | Moves the selected label one level higher on the list. |
Move Down | Moves the selected label one level lower on the list. |
Move to Bottom | Moves the selected label to the bottom of the labels list. |
Rename | Renames the selected label. |
Delete | Deletes the selected label. |
The Accuracy Tool is used to check calibration quality and tracking accuracy of a given volume. There are two tools in this tab: the Volume Accuracy tool and the Marker Measurement tool. The Accuracy tools are available under the Accuracy Tools tab in the Measurements pane.
This tool works only with a fully calibrated capture volume and requires the calibration wand that was used during the calibration process. It compares the length of the captured calibration wand against its known theoretical length and computes the percent error for the tracking volume, from which you can assess the tracking accuracy.
In Live mode, open the Measurements pane under the Tools tab.
Access the Accuracy tools tab.
Under the Wand Measurement section, the pane indicates the wand that was used for the volume calibration and its expected length (theoretical value). The wand length is specified under the Calibration pane, where you can set the theoretical length to the desired value.
Bring the calibration wand into the volume.
Once the wand is in the volume, detected wand length (observed value) and the calculated wand error will be displayed accordingly.
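The reported wand error is simply the deviation of the observed length from the theoretical one. A quick sanity check, using made-up numbers:

```python
# Percent error of the observed wand length vs. the theoretical length.
# Both values below are hypothetical examples.
expected_mm = 500.0    # theoretical wand length from the Calibration pane
observed_mm = 500.35   # detected wand length reported by the Accuracy Tool

error_mm = observed_mm - expected_mm
error_pct = error_mm / expected_mm * 100
print(f"wand error: {error_mm:.3f} mm ({error_pct:.4f} %)")
```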
This tool calculates measured displacement of a selected marker. You can use this tool to compare the calculated displacement in Motive against how much the marker has actually moved to check the tracking accuracy of the system.
Place a marker inside the capture volume.
Select the marker in Motive.
Under the Marker Measurement section, press Reset Measurement. This zeroes the position of the marker.
Slowly translate the marker, and the absolute displacement will be displayed.
The Probe tab is used with the Measurement Probe Tool Kit. See Measurement Probe Kit Guide for specific instructions.
Calibration Frames | Number of frames to sample when calibrating the measurement probe. |
Start Calibration | Initiates the probe calibration process. |
Sample Frames | Total number of frames to collect for calculating the 3D location of the probe tip with respect to the markers on the probe. |
Take Sample | Samples the 3D location of the probe tip. |
Clear All | Clears all collected probe samples. |
Set Origin | Re-positions the global coordinate origin to where the probe tip is located. |
Set Orientation | Re-orients the global coordinate system to three of the sampled points. |
Sound | Enables/disables the beeping sound each time a sample is collected. |
This section of the pane displays tracking information of the probe. It displays both the real-time tracking and recorded samples. If multiple samples are collected, the distance and angle between the sampled points will be calculated and displayed.
This page includes detailed step-by-step instructions on customizing the marker name XML files for skeletons and Marker Set assets.
In order to customize the skeleton marker labels, marker colors, and marker sticks, a Marker XML file needs to be exported, customized, and loaded back in. For skeletons, a modified Marker XML file can only be used with the same Marker Set template. In other words, if you exported a Baseline (41) skeleton and modified the labeling XML file, the same Baseline (41) Marker Set needs to be created in order to import the customized XML file. The following section describes the steps for customizing skeleton XML templates.
a) First, choose a Marker Set from the Builder pane, and create a skeleton.
b) Right-click on a skeleton asset in the Assets pane, and select Export Markers.
c) In the export dialog window, select a directory to save the Marker Name Template (.xml) file. Click Save to export.
Customize Marker Labels
a) Open the exported XML file using a text editor. It will contain corresponding marker label information under the MarkerNameMap section.
b) Customize the marker labels in the XML file. Under the MarkerNameMap section of the XML, modify the labels in the name variables to the desired names, but do not change the oldName variables. The order of the markers should remain the same.
c) If you changed marker labels, the corresponding marker names must also be renamed within the Markers and Marker Sticks definitions. Otherwise, the marker colors and marker sticks will not be defined properly.
Customize Marker Sticks and Colors
a) To customize the marker colors and sticks, open the exported XML file in a text editor and scroll down to the Markers and Marker Sticks sections. If these sections do not exist in the exported XML file, you could be using an old skeleton created before Motive 1.10; updating and exporting the old skeleton will provide these sections in the XML.
b) Here, you can customize the marker colors and the marker sticks. For each marker name, you must use exactly the same marker labels that were defined in the MarkerNameMap section of the same XML file. If any marker label was changed in the MarkerNameMap section, the change must be reflected in the respective colors and sticks definitions as well. In other words, if a Custom_Name was assigned under name for a label in the MarkerNameMap section <Marker name="Custom_Name" oldName="Original_Name" />, the same Custom_Name must be used to rename all the respective marker names within the Marker and MarkerSticks elements of the XML. A small script can apply such renames consistently; see the sketch after this list.
Marker Colors: For each marker in a skeleton, there are respective name and color definitions under the Markers section of the XML. To change the corresponding marker colors for the template, edit the RGB parameter and save the XML file.
Marker Sticks: A marker stick is simply a line interconnecting two labeled markers within the skeleton. Each marker stick definition consists of two marker labels and an RGB value for its color. To modify the marker sticks, edit the marker names and the color values. You can also define additional marker sticks by copying the format of the other marker stick definitions.
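The renaming rule above can be automated. The sketch below parses an exported template and applies a rename map to every attribute that holds an old label, leaving oldName untouched. The file name and labels are hypothetical, and the exact layout of the Markers and Marker Sticks sections may vary by Motive version, so inspect your own export before relying on it.

```python
# Hedged sketch: rename marker labels consistently across an exported
# skeleton Marker XML template. Assumes the MarkerNameMap layout shown
# above (<Marker name="..." oldName="..."/>); the Markers/MarkerSticks
# layout may differ by Motive version, so verify against your export.
import xml.etree.ElementTree as ET

renames = {"Original_Name": "Custom_Name"}  # old label -> new label (hypothetical)

tree = ET.parse("baseline41_markers.xml")   # file exported via Export Markers
for elem in tree.getroot().iter():
    for attr, value in list(elem.attrib.items()):
        # Leave oldName untouched, as instructed above; patch every other
        # attribute (name entries, stick endpoints) that holds an old label.
        if attr != "oldName" and value in renames:
            elem.set(attr, renames[value])

tree.write("baseline41_markers_custom.xml", encoding="utf-8", xml_declaration=True)
```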
Creating new skeletons
Now that you have customized the XML file, it can be loaded each time you create new skeletons. In the Builder pane, under the skeleton creation options, select the corresponding Marker Set. Then, under the Marker Names drop-down menu, choose (…) to browse for the customized XML file. When you create the skeleton, the custom marker labels, marker sticks, and marker colors will be applied. You will need to auto-label the Take again if you are working on a recorded TAK file.
If you manually added extra markers to a skeleton, you must rename the skeleton after creating it. See more at the Added Markers section.
Renaming Markers on existing Skeleton
You can also apply a customized XML template to an existing skeleton using the renaming feature. Right-click on a skeleton asset in the Assets pane and select Rename Markers from the context menu; this brings up a dialog window for importing a skeleton XML template. Import the customized XML template and the modified labels will be applied to the asset. This feature must also be used if extra markers were added to the default XML template.
In order to replace the existing labels with the modified labels, you will need to first delete the existing marker labels and auto-label the skeleton asset again with the renamed markers, or you can reconstruct and auto-label the entire Take again.
XML definitions can also be applied to markers added to a skeleton asset. When extra markers are added to a skeleton, the exported XML file will have the corresponding marker labels logged at the end of the MarkerNameMap section, and the labels for these markers can be customized from the exported XML file. The marker color and stick definitions for extra markers are not created automatically; to assign marker colors and sticks for the extra markers, you will need to add entries that exactly copy the format used in the other entries.
A newly created skeleton will not contain the added markers within the asset. To apply the customized XML for the extra markers, you must first create a skeleton of the same Marker Set and add the extra markers before importing the XML. When adding multiple markers, it is important that they are added in exactly the same order as they were added to the exported skeleton; otherwise, the extra labels will be assigned incorrectly. After adding the corresponding markers, use the Rename Markers feature to apply the customized XML file. Lastly, auto-label the Take to assign the corresponding marker definitions to the skeleton.
Applying Customized XML with Added Markers
[Motive: Builder pane] Create the skeleton using the markers without including the extra markers that were added. These markers will be added on the next step.
[Motive: Perspective View pane] Add the extra markers onto the selected skeleton asset. See how to add markers.
[Motive: Assets pane] Select the skeleton and click Rename Markers to import the customized skeleton XML template.
[Motive: Assets pane] When working with a recorded Take, first delete the existing marker labels using Delete Marker Labels from the Assets pane, then auto-label the Take to label the markers using the imported XML file.
In Motive, the Labeling pane can be accessed under the View tab or by clicking the icon on the main toolbar.
For more explanation on the labeling workflow, read through the Labeling workflow page.
White Label | The label is assigned to a marker in the current frame. |
Orange Label | The marker exists in the current frame, but it is unlabeled. |
Red Label | The marker is not tracked in the current frame. |
Assign labels to a selected marker for all, or selected, frames in a capture.
Applies labels to a marker within the frame range bounded by trajectory gaps and spikes (erratic changes). The Max Spike value sets the threshold for spikes, which is used to set the labeling boundary. The Max Gap size determines the tolerable gap size in a fragment; trajectory gaps larger than this value will set the labeling boundary. This setting is effective when correcting label swaps.
Max Gap | Sets the tolerable gap size at both ends of the fragment labeling. |
Max Spike | Sets the maximum allowable velocity of a marker (mm/frame) for it to be considered a spike. |
In Motive, the Status Log pane can be accessed under the View tab or by clicking the icon on the main toolbar.
The Status Log pane logs important events or statuses of the system operation. Actively occurring events are listed under the Current section and all of the events are logged under the History section for the record. The log can be exported into a text file for troubleshooting references.
In general, when there are no errors in the system operation, the Current section of the log remains free of warning or error messages. Occasionally during system operation, however, error/warning messages (e.g. Dropped Frame, Discontinuous Frame ID) may pop up momentarily and disappear afterward. This can occur when Motive changes its configuration, for example, when switching between Live and Edit modes or when re-configuring the synchronization settings. This is common behavior and does not necessarily indicate a system error as long as the messages do not persist in the Current section. If an error message persists under the Current section or has a high event count, it indicates an issue with the system operation.
Status messages fall into three categories: Informational, Warning, and Error. Logged status messages in the history list can be filtered by choosing a specific category under the Display Filter section. Status messages appear in chronological order with corresponding timestamps, which indicate the number of seconds elapsed since the software started.
Symbol Convention
Note: This table is not an exhaustive list of messages in the Log pane.
This page provides information on the Probe pane, which can be accessed under the Tools tab or by clicking on the icon from the toolbar.
This section highlights what's in the Probe pane. For detailed instructions on how to use the Probe pane to collect measurement samples, read through the Measurement Probe Kit Guide.
The Probe Calibration feature under the Rigid Body edit options can be used to re-calibrate a pivot point of a measurement probe or a custom Rigid Body. This step is also completed as one of the calibration steps when first creating a measurement probe, but you can re-calibrate it under the Modify tab.
In Motive, select the Rigid Body or a measurement probe.
Bring out the probe into the tracking volume where all of its markers are well-tracked.
Place and fit the tip of the probe in one of the slots on the provided calibration block.
Click Start.
Once it starts collecting samples, slowly move the probe in a circular pattern while keeping the tip fitted in the slot, tracing a cone shape overall. Gently rotate the probe to collect additional samples.
When sufficient samples are collected, the mean error of the calibrated pivot point will be displayed.
Click Apply to use the calibrated definition or click Cancel to calibrate again.
The Digitized Points section is used for collecting sample coordinates using the probe. You can select which Rigid Body to use from the drop-down menu and set the number of frames used to collect each sample. Clicking on the Sample button will trigger Motive to collect a sample point and save it into the C:\Users\[Current User]\Documents\OptiTrack\measurements.csv file.
When needed, export the measurements of the accumulated digitized points into a separate CSV file, and/or clear the existing samples to start a new set of measurements.
Shows the live X/Y/Z position of the calibrated probe tip.
Shows the live X/Y/Z position of the last sampled point.
Shows the distance between the last point and the live position of the probe tip.
Shows the distance between the last two collected samples.
Shows the angle between the last three collected samples.
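These distance and angle readouts can be reproduced from the saved samples. The sketch below assumes measurements.csv stores one numeric sample per row with X/Y/Z in the first three columns and no header, and it treats the middle of the last three samples as the angle's vertex; both are assumptions to check against your file.

```python
# Recompute the last-sample distance and three-sample angle from the
# digitized points file. Column layout (x, y, z first, no header) and the
# vertex convention for the angle are assumptions; inspect your own CSV.
import numpy as np

pts = np.loadtxt("measurements.csv", delimiter=",", usecols=(0, 1, 2))
p1, p2, p3 = pts[-3], pts[-2], pts[-1]

distance = np.linalg.norm(p3 - p2)          # distance between last two samples
v1, v2 = p1 - p2, p3 - p2                   # angle measured at the middle sample
cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
angle_deg = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

print(f"distance: {distance:.3f}, angle: {angle_deg:.2f} deg")
```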
When a force plate is selected in Motive, its device information is listed under the Properties pane. To configure force plate properties, use the Properties pane and modify the corresponding device properties.
For more information, read through the force plate setup pages.
Advanced Settings
The Properties: Force Plates pane contains advanced settings that are hidden by default. Access these settings by going to the menu at the top-right corner of the pane and clicking Show Advanced; all of the settings, including the advanced ones, will then be listed in the pane.
The list of advanced settings can also be customized to show only the settings needed for your capture application. To do so, go to the pane menu, click Edit Advanced, and uncheck any settings that you do not wish to be listed in the pane by default. When finished, click Done Editing to apply the customized configuration.
Force Plate Group Properties:
Group policy is enforced for force plates from the same vendor. This means most force plate properties are shared within the force plate group. Shared settings include the enabled status, sampling rate, and sync mode. These settings should be configured the same for all force plates in most cases. If you need to disable a specific force plate in the group, this must be done by powering off the amplifier or by disabling the device from the Windows Device Manager.
Enables or disables selected force plate. Only enabled force plates will be shown in Motive and be used for data collection.
Selects whether the force plate is synchronized through a recording trigger. This must be set to Device when force plates are synchronized through the recording trigger signal from the eSync. This must be set to None when synchronizing through a clock signal.
When set to true, the force plate system synchronizes by reference to an external clock signal. This must be enabled for reference clock sync. When the two systems sync using the recording trigger, this must be turned off.
Indicates the output port on the eSync that is used for synchronizing the selected force plate. This must match the output port on the eSync that is connected to the force plate amplifier and sending out the synchronization signal.
Resulting data acquisition rate of the force plates. For reference clock sync setups, it will match the frequency of the clock signal. For triggered sync setups, it will match a multiple of the camera system frame rate.
Assigned number of the force plates.
Name of the Motive asset associated with the selected device. For Manus Glove integration, this must match the name of the Skeleton.
Name of the selected force plate.
Model number of the force plate.
Force plate serial number.
Number of active channels available in the selected device. For force plates, this defaults to 6 with channels responsible for measuring 3-dimensional force and moment data.
Indicates the state of the force plate. If the force plate is streaming data, Receiving Data is indicated. If the force plate is on standby for data collection, Ready is indicated.
Size scale of the resultant force vector shown in the 3D viewport.
Length of the force plate.
Width of the force plate.
Manufacturer defined electrical-to-mechanical offset values.
Lists out positions of the four force plate corners. Positions are measured with respect to the global coordinate system, and this is calibrated when you Set Position using the CS-400 calibration square.
In the Edit mode, when this option is enabled, Motive accesses the recorded 2D data of the current Take. In this mode, Motive live-reconstructs from the recorded 2D data, and you can inspect the reconstructions and marker rays in the viewports.
The star mark allows users to mark the best Takes. Simply click on the star icon and mark the successful Takes.
The health status column of the Takes indicates the user-selected status of each take:
: Excellent capture
: OK capture
: Poor capture
Indicates whether 2D data exists on the corresponding Take.
Indicates whether reconstructed 3D data exists on the corresponding Take.
If 3D data does not exist on a Take, it can be derived from the 2D data by performing the reconstruction pipeline. See the reconstruction page for more details.
Indicates whether reference videos exist in the Take. Reference videos are recorded from cameras that are set to either MJPEG grayscale or raw grayscale modes.
Indicates whether any of the assets have solved data baked into them.
Indicates whether synchronized audio data was recorded with the Take.
Indicates whether analog data recorded using a data acquisition device exists in the Take.
Timecode stamped to the starting frame of the Take. This is available only if a timecode signal was integrated into the system.
Merges two trajectories together. This feature is useful when used with the Tracks View graphs. Select two trajectories and click this button to merge the top trajectory into the bottom trajectory.
Merges two trajectories together. This feature is useful when used with the Tracks View graphs. Select two trajectories and click this button to merge the bottom trajectory into the top trajectory.
To create a Marker Set, click the icon under the Assets pane and select New Marker Set.
Once a Marker Set asset is created, its list of labels can be managed using the Marker Sets pane. First, select the Marker Set asset in Motive and the corresponding asset will be listed in the Marker Sets pane. New marker labels can then be added by clicking the add icon. If you wish to create multiple marker labels at once, they can be added by typing in the labels or by copying and pasting a carriage-return-delimited list of labels from the Windows clipboard onto the pane (press Ctrl+V in the Marker List window).
Function | Icon | Description |
---|---|---|
The Labeling pane includes a list of marker labels associated with the capture. The color of each label indicates whether the marker is tracked in the current frame, and the corresponding gap percentage is shown next to each label. When a Marker Set is chosen under the Selected drop-down menu, only the associated labels are listed. In addition, the Marker Set selection can be linked to the 3D selection in the perspective view pane by enabling the Link to 3D button.
Symbol | Message | Description |
---|---|---|
Multiplier applied to the camera system frame rate. This is available only for triggered sync and can also be configured from the Devices pane. The resulting rate decides the sampling rate of the force plates.
QuickLabel Mode | Switches to the QuickLabel Mode, which allows assigning selected labels with a single click. |
Select Mode | Switches back to the Select Mode, which is used for normal operations. |
Split Column View | Splits the list of labels into two columns for organization purposes. Unlabeled trajectories are sorted into the right column, and the selected Marker Set labels are sorted into the left column. |
Apply Labels to Previous Frames | When enabled, marker labels are applied to the same marker trajectory from the current frame backward. When disabled, labels are not assigned to previous frames. |
Apply Labels to Upcoming Frames | When enabled, marker labels are applied to the same marker trajectory from the current frame forward. When disabled, labels are not assigned to upcoming frames. |
Increment Label Selection | Sets the selection advancement behavior as each label is assigned: Do not increment (selection stays the same after labeling), Go to next label (selection advances to the next label on the list), or Go to next unlabeled marker (selection advances to the next unlabeled marker on the list). |
Auto-Label | Performs auto-labeling for selected Takes in the session. |
Unlabel Selected | Unlabels selected trajectories. |
Camera Calibration Updated ({#} mm/ray mean error) | The continuous calibration feature has updated and improved the camera calibration. |
Plugin Device Created: {Name} | The plugin device object for an external device (e.g. force plate or NI-DAQ) has been successfully created. |
Plugin Device Registered: {Name} | The plugin device has been registered in Motive. |
Loaded Plugin: {Directory} | The plugin DLL in the {Directory} has been loaded. |
Streaming: Duplicate Frame | A duplicate frame has been sent out through the data stream. |
Streaming: Discontinuous Frame ID | The streamed frame ID was discontinuous. |
Network client connect request received. | A NatNet client application has requested to connect to the server application, Motive. |
Network client disconnect request received. | A NatNet client application has requested to disconnect from the server application, Motive. |
Network client validation request received. | A NatNet client application is requesting validation in order to connect to the server application, Motive. |
Continuous Calibration: (Status) | Evaluating: the continuous calibration feature is assessing the calibration quality. Sampling: the feature is sampling reconstructions for updating the calibration. Solving: the feature is solving and updating the calibration. |
Continuous calibration updated | The calibration has been automatically updated. The updated mean error value is also reported. |
CAM Camera #: Not Receiving Frame Data | The camera (#) is not receiving frame data. This could simply be because the cameras are still waiting to be initialized; if this status persists, it is likely due to a hardware problem. |
CAM Camera #: Packet Header CRC Fail | Error in the received camera data packet. Data packets from the cameras are invalid. |
CAM Synchronization: Invalid Packet Received | An invalid packet was received, indicating a networking error in the camera synchronization. |
CAM Synchronization: Packet Header CRC Fail | Error in the received synchronization data packet, indicating a networking error in the camera synchronization. |
CAM Synchronization: Packet Length Fail | The received packet length was invalid, indicating a networking error in the camera synchronization. |
2D: Camera Stalled | Cameras are stalled. Check the cable connections and make sure the appropriate cable type is used. Also make sure the cables have electromagnetic interference shielding; when unshielded cables are bundled close together, they can interfere with each other and cause the cameras to stall. Note that flat Ethernet cables often lack electromagnetic interference shielding. |
CAM Camera #: Dropped Frame | The received frame was invalid and was dropped. The cameras are not working correctly. |
CAM Synchronization: Dropped Frame | Data synchronization failed and the frame was dropped. |
The Properties pane can be accessed by clicking on the icon on the toolbar.
The Properties pane lists the settings configured for selected objects. In Motive, each type of asset has a list of associated properties, and you can access and modify them using the Properties pane. These properties determine how the corresponding items are displayed and tracked in Motive. This page goes over all of the properties, for each type of asset, that can be viewed or configured in Motive.
Properties are listed for recorded Takes, Rigid Body assets, Skeleton assets, force plate devices, and NI-DAQ devices. Detailed descriptions of the corresponding properties are documented on the following pages:
Selected Items
The Properties pane contains advanced settings that are hidden by default. Access these settings by going to the menu at the top-right corner of the pane and clicking Show Advanced; all of the settings, including the advanced ones, will then be listed in the pane.
The list of advanced settings can also be customized to show only the settings needed for your capture application. To do so, go to the pane menu, click Edit Advanced, and uncheck any settings that you do not wish to be listed in the pane by default. When finished, click Done Editing to apply the customized configuration.
When a camera, or a camera group, is selected in the Devices pane, the related camera settings are displayed in the Properties pane. From the Properties pane, you can configure the camera settings so that they are optimized for your capture application. You can enable/disable the IR LEDs, change the exposure length, set the video mode, apply gain to the captured frames, and more. This page lists the camera properties and what they are used for.
Advanced Settings
The Properties: Camera pane contains advanced settings that are hidden by default. Access these settings by going to the menu at the top-right corner of the pane and clicking Show Advanced; all of the settings, including the advanced ones, will then be listed in the pane.
The list of advanced settings can also be customized to show only the settings needed for your capture application. To do so, go to the pane menu, click Edit Advanced, and uncheck any settings that you do not wish to be listed in the pane by default. When finished, click Done Editing to apply the customized configuration.
Enables/disables selected cameras. When cameras are disabled, they neither record any data nor contribute to the reconstruction of 3D data.
Shows the frame rate of the camera. The camera frame rate can only be changed within the Devices pane.
This setting determines whether or not selected cameras contribute to the real-time reconstruction.
[Advanced] When this is set to on, the 2D data from selected cameras will contribute to the continuous calibration updates.
Shows the rate multiplier or divider applied to the master frame rate. The master frame rate depends on the sync configuration.
Sets the amount of time that the camera exposes per frame. The minimum and maximum values depend on both the type of camera and the frame rate. Higher exposure allows more light in, creating a brighter image that can increase visibility for small and dim markers. However, setting the exposure too high can introduce false markers, larger marker blooms, and marker blurring, all of which can negatively impact marker data quality. The exposure value is measured in scanlines for tracking bars and Flex 3 series cameras, and in microseconds for Flex 13, S250e, Slim 13E, and Prime series cameras.
Defines the minimum brightness for a pixel to be seen by a camera, with all pixels below the threshold being ignored. Increasing the threshold can help filter interference by non-markers (e.g. reflections and external light sources), while lowering the threshold can allow dimmer markers to be seen by the system (e.g. smaller markers at longer distances from the camera).
This setting enables or disables the IR LED ring on selected cameras. For tracking passive retro-reflective markers, this setting must be set to true to illuminate the IR LED rings for tracking. If the IR illumination is too bright for the capture, you can decrease the camera exposure setting to decrease the amount of light received by the imager; dimming the overall captured frames.
Sets the video type of the selected camera.
Sets the camera to view either the visible or IR spectrum on cameras equipped with a Filter Switcher. When enabled, the camera captures in the IR spectrum; when disabled, the camera captures in the visible spectrum. Infrared Spectrum should be selected when the camera is used for marker tracking applications. Visible Spectrum can optionally be selected for full-frame video applications, where external, visible spectrum lighting is used to illuminate the environment instead of the camera's IR LEDs. Common applications include reference video and external calibration methods that use images projected in the visible spectrum.
Sets the imager gain level for the selected cameras. Gain settings can be adjusted to amplify or diminish the brightness of the image. This setting can be beneficial when tracking at long ranges. However, note that increasing the gain level will also increase the noise in the image data and may introduce false reconstructions. Thus, before deciding to change the gain level, adjust the camera settings first to optimize the image clarity.
[Advanced] This property indicates whether the selected camera has been calibrated or not. This is just an indication of whether the camera has been processed through the calibration wanding, but it does not validate the quality of the camera calibration.
Basic information about the selected camera is listed in the Details section.
Displays the camera number assigned by Motive.
Displays the model of a selected camera.
Displays the serial number of the selected camera.
Displays focal length of the lens on the selected camera.
When this is enabled, the estimated field of view (FOV) of the selected camera will be shown in the perspective viewport.
Shows or hides frame delivery information for the selected camera. The frame delivery information is used for diagnosing how fast each camera is delivering its frame packets. When enabled, the frame delivery information is shown in the camera views.
Shows or hides the guide reticle when using the Aim Assist button for aiming the cameras.
Prime color cameras also have the following properties that can be configured:
Default: 1920, 1080
This property sets the resolution of the images captured by the selected cameras. Since the amount of data increases with higher resolution, the maximum allowable frame rate varies depending on which resolution is selected. Below are the maximum allowed frame rates for each respective resolution setting.
Default: Constant Bit Rate.
This property determines how much the captured images will be compressed. The Constant Bit-Rate mode is used by default and recommended because it is easier to control the data transfer rate and efficiently utilize the available network bandwidth.
Constant Bit-Rate
In the Constant Bit-Rate mode, Prime Color cameras vary the degree of image compression to match the data transmission rate given under the Bit Rate settings. At a higher bit-rate setting, the captured image will be compressed less. At a lower bit-rate setting, the captured image will be compressed more to meet the given data transfer rate, but compression artifacts may be introduced if it is set too low.
Variable Bit-Rate
The Variable Bit-Rate setting is also available for keeping the amount of compression constant and allowing the data transfer rate to vary. This mode can be beneficial when capturing objects with detailed textures because it keeps the amount of compression the same on all frames. However, it may introduce dropped frames when the camera compresses highly detailed images, because the increased data transfer rate may overflow the network bandwidth. For this reason, we recommend using the Constant Bit-Rate setting in most applications.
Default: 50
Available only while using Constant Bit-rate Mode
The bit-rate setting determines the transmission rate output from the selected color camera. The value is measured as a percentage (of 100%) of the maximum data transmission speed; each color camera can output up to ~100 MB/s. In other words, the configured value indirectly represents the transmission rate in megabytes per second (MB/s). At a bit-rate setting of 100, the camera captures the best quality image; however, it could overload the network if there is not enough bandwidth to handle the transmitted data.
Since the bit-rate controls the amount of data output from each color camera, this is one of the most important settings for properly configuring the system. If your system is experiencing 2D frame drops, one of the system requirements is not being met: network bandwidth, CPU processing, or RAM/disk memory. In such cases, you can decrease the bit-rate setting to reduce the amount of data output from the color cameras. A rough budget check is sketched below.
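A back-of-the-envelope bandwidth check, using the ~100 MB/s per-camera maximum quoted above; the uplink capacity below is an illustrative assumption, not a measured value:

```python
# Rough aggregate-bandwidth check for Prime Color cameras. The ~100 MB/s
# per-camera maximum comes from this page; the uplink capacity is an
# illustrative assumption for the example only.
MAX_MBPS_PER_CAMERA = 100.0          # ~100 MB/s at a bit-rate setting of 100

def aggregate_mbps(n_cameras, bit_rate_pct):
    return n_cameras * MAX_MBPS_PER_CAMERA * bit_rate_pct / 100.0

load = aggregate_mbps(n_cameras=4, bit_rate_pct=50)   # -> 200.0 MB/s
uplink_mbps = 1250.0                 # assumed ~10 Gb/s uplink, in MB/s
print(f"{load:.0f} MB/s:", "within budget" if load < uplink_mbps else "over budget")
```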
Image Quality
The image quality increases at a higher bit-rate setting because more data is recorded, but this results in larger file sizes and possible frame drops due to data bandwidth bottlenecks. The desired balance depends on the capture application. The graph below illustrates how the image quality varies with the camera frame rate and bit-rate settings.
Tip: Monitoring data output from each camera
Default: 24
Gamma correction is a non-linear amplification of the output image. The gamma setting will adjust the brightness of dark pixels, mid-tone pixels, and bright pixels differently, affecting both brightness and contrast of the image. Depending on the capture environment, especially with a dark background, you may need to adjust the gamma setting to get best quality images.
By modifying the device properties of the OptiHub, users can customize the sync configuration of the camera system for implementing external devices in various sync chain setups. This page lists the properties of the OptiHub. For general instructions on customizing sync settings for integrating external devices, read through the External Device Sync Guide: OptiHub 2.
While the OptiHub is selected under the Devices pane, use the Properties pane to view and configure its properties. By doing so, users can set the parent sync source for the camera system, configure how the system reacts to input signals, and also which signals to output from the OptiHub for triggering other external acquisition devices.
This option is only valid if the Sync Input: Source is set to Internal Sync. Controls the frequency in Hertz (Hz) of the OptiHub 2's internal sync generator. Valid frequency range is 8 to 120 Hz.
This option is only valid if the Sync Input: Source is set to Sync In or USB Sync. Controls synchronization delay in microseconds (us) between the chosen sync source signal and when the cameras are actually told to expose. This is a global system delay that is independent of, and in addition to, an individual camera's exposure delay setting. Valid range is 0 to 65862 us, and should not exceed one frame period of the external signal.
To set up the sync input signals, first define an input Source and configure the desired trigger settings for that source:
Internal/Wired sets the OptiHub 2 as the sync source. This is the default sync configuration which uses the OptiSync protocol for synchronizing the cameras. The Parent OptiHub 2 will generate an internal sync signal which will be propagated to other (child) OptiHub 2(s) via the Hub Sync Out Jack and Hub Sync In Jack. For V100:R1(legacy) and the Slim 3U cameras, Wired Sync protocol is used. In this mode, the internal sync signal will still be generated but it will be routed directly to the cameras via daisy-chained sync cables.
Sync In sets an external device as the sync source.
USB Sync sets an external USB device as the sync source. This mode is for customers who use the Camera SDK and would like their own software to trigger the cameras. Using the provided API, the trigger signal is sent to the OptiHub 2 via its USB uplink connection to the PC.
The Internal/Wired input source uses the OptiHub 2's internal synchronization generator as the main sync source. You can modify the synchronization frequency for both the Wired and OptiSync protocols under the Synchronization Control section. When you adjust the system frame rate from this panel, the modified frame rate may not be reflected in the Devices pane; check the streaming section of the status bar for the exact information.
This option is only valid if the Sync Input: Source is set to Internal Sync. Controls the frequency in Hertz (Hz) of the OptiHub 2's internal sync generator; this frequency controls the camera system frame rate. Valid frequency range is 8 to 120 Hz.
The Sync In input source setting uses signals coming into the input ports of the OptiHub 2 to trigger the synchronization. Please refer to External Device Sync Guide: OptiHub 2 page for more instructions on this.
Detects and displays the frequency of the sync signal that's coming through the input port of the parent OptiHub 2, which is at the very top of the RCA sync chain. When sync source is set to Sync In, the camera system framerate will be synchronized to this input signal. Please note that OptiHub 2 is not designed for precise sync, so there may be slight sync discrepancies when synchronizing through OptiHub 2.
Manually adds global sync time offset to how camera system reacts to the received input signal. The input unit is measured in microseconds.
Selects the triggering signal characteristic from the connected input source: Either Edge, Rising Edge, Falling Edge, Low Gated, or High Gated.
Allows a triggering rate compatible with the camera frame rate to be derived from higher frequency input signals (e.g. 300Hz decimated down to 100Hz for use with a V100:R2 camera). Valid range is 1 (no decimation) to 15 (every 15th trigger signal generates a frame).
USB Sync (the camera system will be the child) sets an external USB device as the sync source. This mode is for customers who use the Camera SDK and would like their own software to trigger the cameras. Using the provided API, the trigger signal is sent to the OptiHub 2 via its USB uplink connection to the PC.
Detects and displays the frequency of the parent source.
Allows the user to pass or block trigger events generated by the internal sync control. This option has been deprecated for use in the GUI. Valid options are Gate-Open and Gate-Closed.
Allows a triggering rate compatible with the camera frame rate to be derived from higher frequency input signals (e.g. 360 Hz decimated down to 120 Hz for use with a Flex 13 camera). Valid range is 1 (no decimation) to 15 (every 15th trigger signal generates a frame).
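The decimation arithmetic from both examples above reduces to a simple division:

```python
# Derived trigger rate after decimation: every Nth input trigger fires a frame.
def triggered_rate(input_hz, decimation):
    assert 1 <= decimation <= 15, "valid decimation range per the settings above"
    return input_hz / decimation

print(triggered_rate(300, 3))   # -> 100.0 (V100:R2 example)
print(triggered_rate(360, 3))   # -> 120.0 (Flex 13 example)
```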
Sync signals can also be sent out through the output ports of the OptiHub 2 to child devices in the synchronization chain. Read more: External Device Sync Guide: OptiHub 2.
Selects condition and timing for a pulse to be sent out over the External Sync Out jack. Available Types are: Exposure Time, Pass-Through, Recording Level, and Recording Pulse.
Polarity
Selects the output polarity of the External Sync Out signal. Valid options are Normal and Inverted. Normal signals are low and pulse high; inverted signals are high and pulse low.
Skeleton properties determine how Skeleton assets are tracked and displayed in Motive.
To view related properties, select a Skeleton asset in the Assets pane or in the 3D viewport, and the corresponding properties will be listed under the Properties pane. These properties can be modified both in Live and Edit mode. Default creation properties are listed under the Application Settings.
Advanced Settings
The Properties: Skeleton pane contains advanced settings that are hidden by default. Access these settings by going to the menu at the top-right corner of the pane and clicking Show Advanced; all of the settings, including the advanced ones, will then be listed in the pane.
The list of advanced settings can also be customized to show only the settings needed for your capture application. To do so, go to the pane menu, click Edit Advanced, and uncheck any settings that you do not wish to be listed in the pane by default. When finished, click Done Editing to apply the customized configuration.
Shows the name of selected Skeleton asset.
Enables/disables both tracking of the selected Skeleton and its visibility in the perspective viewport.
The minimum number of markers that must be tracked and labeled in order for a Rigid Body asset, or each Skeleton bone, to be booted or first tracked.
The minimum number of markers that must be tracked and labeled in order for a Rigid Body asset, or each Skeleton bone, to continue to be tracked after the initial boot.
[Advanced] Euler angle rotation order used for calculating the bone hierarchy.
Selects whether or not to display the Skeleton name in the 3D Perspective View.
Selects how the Skeleton will be shown in the 3D perspective view.
Segment: Displays Skeleton as individual Skeleton segments.
Avatar (male): Displays Skeleton as a male avatar.
Avatar (female): Displays Skeleton as a female avatar.
Sets the color of the Skeleton.
This feature is supported in Live mode and 2D mode only. When enabled, the color of the Skeleton segments will change whenever there are tracking errors.
Show or hide Skeleton bones.
[Advanced] Displays the orientation axes of each segment in the Skeleton.
[Advanced] Shows the asset model markers as transparent spheres on each Skeleton segment. The asset model markers are the expected marker locations according to the Skeleton solve.
[Advanced] Draws lines between labeled Rigid Body or Skeleton markers and the corresponding expected marker locations. This helps to visualize the offset between the actual marker locations and the asset model markers.
[Advanced] Displays lines between each Skeleton marker and its associated Skeleton segment.
Applied double-exponential smoothing to translation and rotation of a Rigid Body or a skeletal bone. Disabled at 0.
Compensates for system latency by predicting bone movements into the future. For this feature to work best, smoothing needs to be applied as well. Disabled at 0.
[Advanced] When needed, you can damp down translational and/or rotational tracking of a Rigid Body or a Skeleton bone on selected axes.
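Motive exposes Smoothing and Forward Prediction as simple numeric settings and does not document the underlying math. The sketch below is only an illustration of classic double exponential (Holt) smoothing with a velocity-based look-ahead, applied per axis; the gain values and their mapping to Motive's settings are assumptions.

```python
# Minimal sketch of double exponential (Holt) smoothing with forward
# prediction on a 1-D signal. The alpha/beta gains and the mapping to
# Motive's Smoothing/Forward Prediction settings are assumptions, not
# Motive's actual implementation.
def smooth_and_predict(samples, alpha=0.5, beta=0.5, predict_frames=0):
    level, trend = samples[0], 0.0
    out = [level]
    for x in samples[1:]:
        prev_level = level
        level = alpha * x + (1 - alpha) * (level + trend)         # smoothed value
        trend = beta * (level - prev_level) + (1 - beta) * trend  # velocity estimate
        out.append(level + trend * predict_frames)                # look ahead to offset latency
    return out

# e.g. noisy positions smoothed and predicted one frame ahead:
print(smooth_and_predict([0.0, 1.2, 1.9, 3.1, 4.0], predict_frames=1))
```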
Rigid body properties determine how the corresponding Rigid Body asset is tracked and displayed in the viewport.
To view related properties, select a Rigid Body asset in the Assets pane or in the 3D viewport, and the corresponding properties will be listed under the Properties pane. These properties can be modified both in Live and Edit mode. Default creation properties are listed under the Application Settings.
Advanced Settings
The Properties: Rigid Body pane contains advanced settings that are hidden by default. To access these settings, open the menu in the top-right corner of the pane and click Show Advanced; all of the settings, including the advanced ones, will then be listed in the pane.
The list of advanced settings can also be customized to show only the settings needed for your specific capture application. To do so, go to the pane menu, click Edit Advanced, and uncheck any settings that you do not wish to be listed in the pane by default. Once the list is configured, click Done Editing to apply the customized configuration.
Allows a custom name to be assigned to the Rigid Body. Default is "Rigid Body X", where X is the Rigid Body ID.
Enables/disables tracking of the selected Rigid Body. Disabled Rigid Bodies will not be tracked, and their data will not be included in the exported or streamed tracking data.
User-definable ID for the selected Rigid Body. When working with capture data in an external pipeline, this value can be used to address specific Rigid Bodies in the scene (see the streaming sketch at the end of this section).
The minimum number of markers that must be tracked and labeled in order for a Rigid Body asset, or each Skeleton bone, to be booted or first tracked.
The maximum distance a Rigid Body marker can deviate from its calibrated position before it becomes unlabeled.
Smoothing
Applies double exponential smoothing to translation and rotation of a Rigid Body. Disabled at 0.
Forward Prediction
Compensates for system latency by predicting a Rigid Body's movement into the future. For this feature to work best, smoothing needs to be applied as well.
Tracking Algorithm
Tracking algorithm used for Rigid Body tracking.
Color of the selected Rigid Body in the 3D Perspective View. Clicking on the box will bring up the color picker for selecting the color.
Selects whether or not to display the Rigid Body name in the 3D Perspective View. If selected, a small label in the same color as the Rigid Body will appear over the centroid in the 3D Perspective View.
Enables the display of a Rigid Body's local coordinate axes. This option can be useful in visualizing the orientation of the Rigid Body, and for setting orientation offsets.
Shows a history of the Rigid Body’s position. When enabled, you can set the history length and the tracking history will be drawn in the Perspective view.
Shows historical orientation axes.
Shows Rigid Body when tracked.
Untracked Markers
Shows Rigid Body when not tracked.
Pivot
Shows the Rigid Body's pivot point.
Assigned Markers
Shows Rigid Body's assigned markers.
Pivot Scale
Scales the size of the Rigid Body's pivot point.
Quality
When this is set to true, the links drawn between Rigid Body markers will transition to red as the deflection approaches the max deflection setting.
Marker Quality
When set to true, the expected markers of the Rigid Body will change color to red as the deflection approaches the max deflection setting.
Model Replace
When true and a valid geometric model is loaded, the model will be drawn instead of the Rigid Body.
The Attached Geometry setting is visible only when the Model Replace setting is enabled. Here, you can load an OBJ file to replace the Rigid Body. The scale, position, and orientation of the attached geometry can also be configured in the following section. When an OBJ file is loaded, the properties defined in the corresponding MTL file alongside the OBJ file will be loaded as well.
Attached Geometry Settings
When Attached Geometry is enabled, you can attach a 3D model to a Rigid Body, and the following settings become available as well.
Pivot Scale: Adjusts the size of the Rigid Body pivot point.
Scale: Rescales the size of the attached object.
Yaw (Y): Rotates the attached object about the Y axis of the Rigid Body coordinate system.
Pitch (X): Rotates the attached object about the X axis of the Rigid Body coordinate system.
Roll (Z): Rotates the attached object about the Z axis of the Rigid Body coordinate system.
X: Translates the attached object along the X axis of the Rigid Body coordinate system.
Y: Translates the attached object along the Y axis of the Rigid Body coordinate system.
Z: Translates the attached object along the Z axis of the Rigid Body coordinate system.
Opacity: Sets the opacity of an attached object. An OBJ file typically comes with a corresponding MTL file which defines its properties, and the transparency of the object is defined within this MTL file. The Opacity value under the Rigid Body properties applies a factor between 0 and 1 to rescale the loaded property. In other words, you can set the transparency in the MTL file and rescale it using the Opacity property in Motive.
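The net effect can be thought of as a simple product. A minimal sketch, assuming the Opacity factor is multiplied with the MTL transparency and clamped to the valid range:

```python
# Assumed behavior: Motive's Opacity factor rescales the transparency
# loaded from the MTL file (a simple clamped product; illustration only).
def effective_opacity(mtl_alpha, opacity_factor):
    return max(0.0, min(1.0, mtl_alpha * opacity_factor))

print(effective_opacity(0.8, 0.5))  # -> 0.4
```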
Uplink ID assigned to the Tag or Puck using the Active Batch Programmer. This ID must match the Uplink ID assigned to the Active Tag or Puck that was used to create the Rigid Body.
Radio frequency communication channel configured on the Active Tag, or Puck, that was used to define the corresponding Rigid Body. This must match the RF channel configured on the active component; otherwise, IMU data will not be received.
Applies double exponential smoothing to translation and rotation of the Rigid Body. Increasing this setting may help smooth out noise in the Rigid Body tracking, but excessive smoothing can introduce latency. Default is 0 (disabled).
Compensates for system latency when tracking the corresponding Rigid Body by predicting its movement into the future. Please note that predicting further into the future may impact tracking stability.
[Advanced] When needed, you can damp down translational and/or rotational tracking of a Rigid Body or a Skeleton bone on selected axes.
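As a concrete illustration of addressing Rigid Bodies by ID in an external pipeline (see the User ID property above), here is a hypothetical sketch modeled on the PythonClient sample shipped with the NatNet SDK. The module name, callback signature, run() usage, and the ID value are assumptions based on that sample, not a guaranteed API.

```python
# Hypothetical sketch: filtering streamed Rigid Bodies by the User ID set
# in Motive. Names follow the NatNet SDK's PythonClient sample and should
# be treated as assumptions.
from NatNetClient import NatNetClient

TARGET_ID = 3  # the User ID assigned in the Rigid Body properties (assumed)

def on_rigid_body(body_id, position, rotation):
    # The streamed ID matches the Rigid Body's User ID, so individual
    # assets can be picked out of the scene by comparing IDs.
    if body_id == TARGET_ID:
        print(f"Rigid Body {body_id}: pos={position} rot={rotation}")

client = NatNetClient()
client.rigid_body_listener = on_rigid_body
client.run()  # starts listening for streamed frames from Motive
```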
The Reference View pane is used to monitor captured videos from the reference cameras. Up to two reference cameras can be monitored in each pane. This pane can be accessed under the View tab → Reference Overlay, or simply by clicking one of the reference view icons on the main toolbar.
Cameras can be set to a reference view from the Devices pane or by configuring the camera's video type to a grayscale mode.
In this pane, cameras, markers, and trackable assets can be overlaid on the reference view. This is a good way of monitoring events during the capture. All of the assets and trajectory histories available in the Perspective View pane can be overlaid on the reference videos from this pane.
Note: the overlaid assets will not be rendered into exported reference videos.
When a Take is selected from the Data pane, related information will be displayed in the Properties pane.
From the Properties pane, you can get general information about the Take, including the total number of recorded frames, the capture date/time, and the list of assets involved in the recording. Also, when needed, the solver settings that were used in the recorded Take can be modified, and these changes will be applied when performing post-processing reconstruction.
Take name
The camera frame rate at which the Take was captured. The Take file will contain the corresponding number of frames for each second.
The frame ID of the first frame saved on the Take.
The frame ID of the last frame saved on the Take.
A timestamp of when the recording was started.
A timestamp of when the recording ended.
Names of the assets that are included in the Take.
Comments regarding the take can be noted here for additional information.
Date and time when the capture was recorded.
The version of Motive in which the Take was recorded. (This applies only to Takes captured in version 1.10 or above.)
The build of Motive in which the Take was recorded.
The data quality of the Take which can be flagged by users.
Progress indicator showing how far into the post-processing workflow this Take has progressed.
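Taken together, the frame rate and the first/last frame IDs above determine the length of the capture. A small sketch, assuming an inclusive frame range:

```python
# Assumed relationship between the Take fields above (inclusive frame range):
# duration_seconds = (last_frame - first_frame + 1) / frame_rate
def take_duration_seconds(first_frame, last_frame, frame_rate):
    return (last_frame - first_frame + 1) / frame_rate

print(take_duration_seconds(0, 2399, 240))  # -> 10.0 seconds
```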
Camera system calibration details for the selected Take. Takes recorded in older versions of Motive may not contain this data.
Shows when the cameras were calibrated.
Displays a mean error value of the detected wand length samples throughout the wanding process.
Displays percentile distribution of the wand errors.
Shows what type of wand was used: Standard, Active, or Micron series.
Displays the length of the calibration wand used for the capture.
Distance from one of the end markers to the center marker, specifically the shorter segment.
When an NI-DAQ device is selected in Motive, its device information gets listed under the Properties pane. Only basic information on the device is shown here; to configure the properties of the device, use the Devices pane.
For more information, read through the NI-DAQ setup page.
Advanced Settings
The Properties: NI-DAQ pane contains advanced settings that are hidden by default. To access these settings, open the menu in the top-right corner of the pane and click Show Advanced; all of the settings, including the advanced ones, will then be listed in the pane.
The list of advanced settings can also be customized to show only the settings needed for your specific capture application. To do so, go to the pane menu, click Edit Advanced, and uncheck any settings that you do not wish to be listed in the pane by default. Once the list is configured, click Done Editing to apply the customized configuration.
Only enabled NI-DAQ devices will actively measure analog signals.
This setting determines how the recording of the selected NI-DAQ device will be triggered. This must be set to None for reference clock sync and to Device for recording trigger sync.
None: NI-DAQ recording is triggered when Motive starts capturing data. This is used when using the reference clock signal for synchronization.
Device: NI-DAQ recording is triggered when a recording trigger signal to indicate the record start frame is received through the connected input terminal.
(Available only when Trigger Sync is set to Device.) Name of the NI-DAQ analog I/O terminal where the recording trigger signal is received.
Sets whether an external clock signal is used as the sync reference. For precise synchronization using a reference clock signal, set this to true.
True: Setting this to true configures the selected NI-DAQ device to synchronize with an external sample clock signal. The NI-DAQ must be connected to an external clock output of the eSync on one of its digital input terminals. The Acquisition Rate setting will be disabled since the rate is controlled by the external clock signal.
False: The NI-DAQ board will collect samples in 'Free Run' mode at the assigned Acquisition Rate.
(Available only when Reference Clock Sync is set to True.) Name of the NI-DAQ digital I/O terminal where the external clock (TTL) signal is received.
Set this to the output port of the eSync where it sends out the internal clock signal to the NI-DAQ.
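For reference, configuring a DAQ task to follow an external sample clock with the nidaqmx Python package looks roughly like the sketch below; the device name, PFI terminal, and clock rate are placeholders, and the actual terminal depends on how the eSync clock output is wired.

```python
# Sketch (nidaqmx Python package): drive the DAQ's sample clock from an
# external source, e.g. the eSync's clock signal wired to a PFI terminal.
# "Dev1", "ai0", "/Dev1/PFI0", and the 1000 Hz rate are placeholders.
import nidaqmx
from nidaqmx.constants import AcquisitionType

with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0")
    task.timing.cfg_samp_clk_timing(
        rate=1000,                       # expected external clock rate (Hz)
        source="/Dev1/PFI0",             # terminal receiving the clock signal
        sample_mode=AcquisitionType.CONTINUOUS,
    )
    samples = task.read(number_of_samples_per_channel=100)
```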
Shows the acquisition rate of the selected NI-DAQ device(s).
Depending on the model, NI-DAQ devices may have different sets of allowable input types and voltage ranges for their analog channels. Refer to your NI-DAQ device User's Guide for detailed information about supported signal types and voltage ranges.
(Default: -10 volts) Configure the terminal's minimum voltage range.
(Default: +10 volts) Configure the terminal's maximum voltage range.
RSE: Referenced single-ended. Measurement with respect to ground (e.g. AI_GND). (Default)
NRSE: Non-referenced single-ended. Measurement with respect to a single analog input (e.g. AISENSE).
Diff: Differential. Measurement between two inputs (e.g. AI0+, AI0-).
PseudoDiff: Pseudodifferential. Measurement between two inputs and an impeded common ground.
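To make these channel settings concrete, here is a short sketch using the nidaqmx Python package; "Dev1/ai0" is a placeholder channel name, and the values simply mirror the defaults described above.

```python
# Sketch (nidaqmx Python package): an analog input channel configured with
# the default -10 V to +10 V range and RSE terminal mode described above.
import nidaqmx
from nidaqmx.constants import TerminalConfiguration

with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan(
        "Dev1/ai0",
        terminal_config=TerminalConfiguration.RSE,  # referenced single-ended
        min_val=-10.0,                              # minimum voltage range
        max_val=10.0,                               # maximum voltage range
    )
    data = task.read(number_of_samples_per_channel=10)
```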
[Advanced] Name of the selected device.
Device model ID, if available.
Device serial number of the selected NI-DAQ assigned by the manufacturer.
Type of device.
Total number of available channels on the selected NI-DAQ device.
[Advanced] The Motive playback mode currently in use.
Whether the device is ready or not.
Tristate status of either Need Sync, Ready for Sync, or Synced. Updates the "State" icon in the Devices pane.
[Advanced] Internal device number.
User editable name of the device.
By modifying the device properties of the eSync, users can customize the sync configurations of the camera system for implementing various sync chain setups.
While the eSync is selected under the Devices pane, use the Properties pane to monitor the eSync properties. Here, users can configure the parent sync source of the camera system and also the output sync signals from the eSync for integrating child devices. For a step-by-step explanation of synchronizing external devices, read through the External Device Sync Guide.
Configure the input signal by first defining which input source to use. Available input sources include Internal Free Run, Internal Clock, SMPTE Timecode In, Video Gen Lock, Inputs (input ports), Isolated, VESA Stereo In, and Reserved. Respective input configurations appear on the pane when a source is selected. For each selected input source, the signal characteristics can be modified.
Synchronization Input Source Options
Controls the frequency of the eSync 2's internal sync generator when using the internal clock.
Introduces an offset delay, in microseconds, to the selected trigger signal.
Sets the trigger mode. Available modes are Either Edge, Rising Edge, and Falling Edge, and each of them uses the corresponding characteristic of the input signal as a trigger.
Allows a triggering rate, compatible with the camera frame rate, to be derived from higher frequency input signals.
Allows a triggering rate, compatible with the camera frame rate, to be derived from lower frequency input signals. Available multiplier range: 1 to 15.
Displays the final rate of the camera system.
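Taken together, the input divider and multiplier determine the final camera rate displayed here. A quick sketch of the assumed arithmetic:

```python
# Assumed relationship: final_rate = input_rate / input_divider * input_multiplier
def final_camera_rate(input_hz, divider=1, multiplier=1):
    return input_hz / divider * multiplier

# e.g. a 360 Hz sync source decimated down for a 120 FPS camera system:
print(final_camera_rate(360, divider=3))  # -> 120.0
```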
eSync 2 ports vs eSync ports
In the eSync 2, three general input ports replace the Lo-Z and Hi-Z input ports of the original eSync. These general input ports are designed for high impedance devices, but low impedance devices can also be connected with appropriate adjustments. When the original eSync is connected to the system, options for Lo-Z and Hi-Z will be displayed instead.
Lo-Z input: Sets an external low impedance device as the trigger. The max signal voltage cannot exceed 5 Volts.
Hi-Z input: Sets an external high impedance device as the trigger. The max signal voltage cannot exceed 5 Volts.
Allows you to configure the signal type and polarity of the synchronization signal sent through the output ports, including the VESA stereo output port, on the eSync 2.
Type: Defines the output signal type of the eSync 2. Use this to sync external devices to the eSync 2.
Polarity: Change the polarity of the signal to normal or inverted. Normal signals constantly output a low signal and pulse high when triggering. Inverted signals constantly output a high signal and pulse low when triggering.
Output Signal Types
Trigger Source: Determines which trigger source is used to initiate the recording in Motive. Available options are Software, Isolated, and Inputs. When the trigger source is set to Software, recording is initiated from Motive.
With the eSync 2, external triggering devices (e.g. a remote start/stop button) can be integrated into the camera system and set to trigger the recording start and stop events in Motive. Such devices connect to the input ports of the eSync 2 and are configured under the Record Triggering section of the eSync 2 properties.
By default, the remote trigger source is set to Software, which corresponds to the record start/stop button click events in Motive. When an external trigger source is used (Trigger Source → Isolated or Inputs), set the trigger source to the corresponding input port and select an appropriate trigger edge. Available trigger options include Rising Edge, Falling Edge, High Gated, and Low Gated; the appropriate option depends on the signal morphology of the external trigger. After the trigger settings have been defined, press the recording button in advance. This sets Motive into a standby mode until the trigger signal is detected through the eSync. When the trigger signal is detected, Motive starts the actual recording. The recording stops and returns to the 'armed' state when the second trigger signal, or the falling edge of the gated signal, is detected.
Under the Record Triggering section, set the source to the respective input port where the trigger signal is received.
Choose an appropriate trigger option, depending on the morphology of the trigger signal.
Press the record button in Motive, which prepares Motive for recording. At this stage, Motive awaits an incoming trigger signal.
When the first trigger is detected, Motive starts recording.
When the second trigger is detected, Motive stops recording and awaits the next trigger for repeated recordings. For the High Gated and Low Gated trigger options, Motive will record during the respective gated windows.
Once all recording is finished, press the stop button to disarm Motive.
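The workflow above amounts to a small state machine. The sketch below only mirrors the described behavior; it is not Motive code.

```python
# Toy state machine mirroring the edge-triggered record workflow described
# above (illustration only, not Motive code).
def next_state(state, event):
    if state == "idle" and event == "press_record":
        return "armed"        # Motive stands by, waiting for a trigger
    if state == "armed" and event == "trigger":
        return "recording"    # first trigger starts the capture
    if state == "recording" and event == "trigger":
        return "armed"        # second trigger stops it and re-arms Motive
    if event == "press_stop":
        return "idle"         # stop button disarms Motive
    return state

state = "idle"
for event in ["press_record", "trigger", "trigger", "press_stop"]:
    state = next_state(state, event)
    print(event, "->", state)
```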
Input Monitor displays the corresponding signal input frequency. This feature is used to monitor the synchronization status of the signals into the eSync 2.
Internal Clock: Displays the frequency of the internal clock in the eSync 2.
SMPTE Timecode In: Displays the frequency of the timecode input.
Video Genlock In: Displays the frequency of the video genlock input.
Inputs: Displays the frequency of the input signals into the eSync 2.
Lo-Z: Displays the frequency of the external low impedance sync device.
Hi-Z: Displays the frequency of the external high impedance sync device.
Isolated: Displays the frequency of the external generic sync device.
Reserved: For internal use only.
Data output from the entire camera system can be monitored through the Status Panel. Output from individual cameras can be monitored from the 2D Camera Preview pane when Camera Info is enabled under the visual aids option.
Marks the best take. Takes that are marked as best can also be accessed via scripts.
Shows mean offset value during calibration.
Displays percentile distribution of the errors.
The camera filter settings in the Take properties determine which IR light detections in the recorded 2D camera data contribute to the reconstruction when re-calculating the 3D data.
For more information on these settings in Live mode, please refer to the corresponding Live settings page.
The Solver/Reconstruction settings under the Take properties are the 3D data solver parameters that were used to obtain the 3D data saved in the Take file. In Edit mode, you can change these parameters and perform post-processing reconstruction to obtain a new set of 3D data with the modified parameters.
For more information on these settings in Live mode, please refer to the corresponding Live settings page.
Properties of individual channels can be configured directly from the Devices pane: click the icon next to a channel to bring up its settings and make changes.
Configures the measurement mode of the selected terminal. In general, analog input channels with screw terminals use the referenced single-ended (RSE) measurement system, and analog input channels with BNC terminals use the differential (Diff) measurement system. For more information on these terminal types, refer to your NI-DAQ device User's Guide.
Note: When capturing multiple recordings via the recording trigger, only the first TAK will contain 3D data. For the subsequent TAKs, the 3D data must be obtained through the post-processing reconstruction pipeline.
Open the Devices pane and the Properties pane to access the eSync 2 properties.
Resolution | Max Frame Rate |
---|---|
960 x 540 (540p) | 500 FPS |
1280 x 720 (720p) | 360 FPS |
1920 x 1080 (1080p) | 250 FPS |
Trigger | Description |
---|---|
Either Edge | Uses either the rising or falling edge of the pulse signal. |
Rising Edge | Uses the rising edge of the pulse signal. |
Falling Edge | Uses the falling edge of the pulse signal. |
High Gated | Triggers while the input signal is at a high voltage level and stops triggering at a low voltage level. |
Low Gated | Triggers while the input signal is at a low voltage level and stops triggering at a high voltage level. |
Output | Description |
---|---|
Exposure Time | Outputs a pulse signal when the cameras expose. |
Pass-Through | Passes the input signal to the output. |
Recording Gate | Outputs a constant high-level signal while recording; otherwise the signal is low. (Referred to as Recording Level in older versions.) |
Gated Exposure Time | Outputs a pulse signal when the cameras expose, during a recording only. (Referred to as Recording Pulse in older versions.) |
Input Source | Description |
---|---|
Internal Free Run | This is the default synchronization protocol for Ethernet camera systems without an eSync 2. In this mode, Prime series cameras are synchronized by communicating the time information with each other through the camera network itself, using a high-precision algorithm for timing synchronization. |
Internal Clock | Sets the eSync 2 to use its internal clock to deliver the sync signal to the Ethernet cameras; the sync signal can be modified as well. |
SMPTE Timecode In | Sets a timecode sync signal from an external device as the input source signal. |
Video Gen Lock | Locks the camera sync to an external video sync signal. |
Isolated | Used for generic sync devices connected to the Isolated Sync In port of the eSync 2. Considered safer than the other general input ports (Hi-Z and Lo-Z). The max signal voltage cannot exceed 12 Volts. |
Inputs | Uses signals through the input ports of the eSync 2. Used for high impedance output devices. The max signal voltage cannot exceed 5 Volts. |
VESA Stereo In | Sets cameras to sync to the signal from the VESA Stereo input port. |
Reserved | Internal use only. |
Type | Description |
---|---|
Exposure Time | Outputs a pulse signal when the cameras expose. |
Recording Gate | Outputs a constant high-level signal while recording; otherwise the signal is low. |
Record Start/Stop Pulse | Outputs a pulse signal both when the system starts and stops recording. |
Gated Exposure Time | Outputs a pulse signal when the cameras expose, while the system is recording. |
Gated Internal Clock | Outputs the internal clock while the system is recording. |
Selected Sync | Outputs the Sync Input signal without factoring in signal modifications (e.g. input dividers). |
Adjusted Sync | Outputs the Sync Input signal accounting for adjustments made to the signal. |
 | Uses a selected input signal to generate the synchronization output signal. |