This wiki contains instructions on operating OptiTrack motion capture systems. If you are new to the system, start with the Quick Start Guides to begin your capture experience.
You can navigate through pages using links in the sidebar or links included within the pages. You can also use the search bar at the top-right corner to search for page names and keywords. If you have questions that are not covered in this wiki or the other provided documentation, please check our forum or contact our Support for further assistance.
OptiTrack website: http://www.optitrack.com
The Helpdesk: http://help.naturalpoint.com
NaturalPoint Forums: https://forums.naturalpoint.com
For versions of Motive 2.2 or older, please visit our old wiki site.
PrimeX 41, PrimeX 22, Prime 41*, and Prime 17W* camera models have powerful tracking capability that allows tracking outdoors. With strong infrared (IR) LED illumination and some adjustments to its settings, a Prime system can overcome sunlight interference and perform 3D capture. This page provides general hardware and software setup recommendations for outdoor captures.
Please note that when capturing outdoors, the cameras will have shorter tracking ranges compared to when tracking indoors. Also, the system calibration will be more susceptible to change in outdoor applications because there are environmental variables (e.g. sunlight, wind, etc.) that could alter the system setup. To ensure tracking accuracy, routinely re-calibrate the cameras throughout the capture session.
Even though it is possible to capture under the influence of the sun, it is best to pick cloudy days for captures in order to obtain the best tracking results. The reasons include the following:
Bright daylight illumination introduces extraneous reconstructions, requiring additional post-processing effort to clean up the captured data.
Throughout the day, the position of the sun changes continuously, as do the reflections and shadows of nearby objects. For this reason, the camera system needs to be routinely re-masked or re-calibrated.
The surroundings can also work to your advantage or disadvantage depending on the situation. Different outdoor objects reflect 850 nm infrared (IR) light in different ways that can be unpredictable without testing. Lining your background with objects that appear black in IR will help distinguish your markers from the background and improve tracking. Some examples of outdoor objects and their relative brightness are as follows:
Grass typically appears as bright white in IR.
Asphalt typically appears dark black in IR.
Concrete varies, but it usually appears gray in IR.
1. [Camera Setup] Camera mount setup
In general, setting up a truss system for mounting the cameras is recommended for stability, but for outdoor captures this is often impractical. For this reason, most outdoor capture applications use tripods for mounting the cameras.
2. [Camera Setup] Camera aim
Do not aim the cameras directly towards the sun. If possible, place and aim the cameras so that they are capturing the target volume at a downward angle from above.
3. [Camera Setup] Lens f-stop
Increase the f-stop setting on the Prime cameras to decrease the aperture size of the lenses. The f-stop setting determines the amount of light let through the lens, and increasing the f-stop value decreases the overall brightness of the captured image, allowing the system to better accommodate sunlight interference, as illustrated below. This also allows camera exposure to be set to a higher value, which is discussed in a later section. Note that the f-stop can be adjusted only on PrimeX 41, PrimeX 22, Prime 41*, and Prime 17W* camera models.
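As a rough rule of thumb, the light reaching the imager scales with the inverse square of the f-number, so each full stop halves the light. The sketch below is illustrative arithmetic only; the actual image response also depends on exposure, gain, and the scene:

```python
# Relative illuminance on the imager scales as 1/N^2, where N is the f-number.
# Illustrative only -- real images also depend on exposure, gain, and scene.

def relative_light(f_stop: float, reference_stop: float = 2.0) -> float:
    """Light reaching the sensor relative to the reference f-stop."""
    return (reference_stop / f_stop) ** 2

for n in (2.0, 2.8, 4.0, 5.6, 8.0):
    print(f"f/{n}: {relative_light(n):.0%} of the light at f/2")
# f/4 admits ~25% of the light at f/2, and f/8 only ~6% --
# a large margin for rejecting ambient sunlight.
```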
4. [Camera Setup] Utilize shadows
Even though it is possible to capture under sunlight, the best tracking results are achieved when the capture environment is optimized for tracking. Whenever applicable, utilize shaded areas to minimize sunlight interference.
1. [Camera Settings] Max IR LED Strength
Increase the LED setting on the camera system to its maximum so that the IR LEDs illuminate at full strength. Strong IR illumination allows the cameras to better differentiate the emitted IR reflections from ambient sunlight.
2. [Camera Settings] Camera Exposure
In general, increasing camera exposure makes the overall image brighter, but it also allows the IR LEDs to light up and remain at maximum brightness for a longer period on each frame. This way, the IR illumination is stronger on the cameras, and the imager can more easily detect marker reflections in the IR spectrum.
When used in combination with the increased f-stop on the lens, this adjustment gives a better distinction of IR reflections. Note that this setup applies only to outdoor applications; for indoor applications, the exposure setting is generally used to control the overall brightness of the image.
*Legacy camera models
Motive 3.0.2 Update:
Following the Motive 3.0.2 release, an internet connection is no longer required for initial use of Motive. If you are currently using Motive 3.0.1 or older, please install this new release from our Software webpage. Please note that an internet connection is still required to download Motive.exe from the OptiTrack website.
Important License Update:
Motive 3 uses a new licensing system. Please check the OptiTrack website for details on Motive licenses.
Security Key (Motive 3.x): Starting from version 3.0, a USB Security Key is required to use Motive. USB Hardware Keys that were used for activating older versions of Motive will no longer work with 3.0 and will need to be replaced with the USB Security Key. For any questions, please contact us.
Hardware Key (Motive 2.x or below): Motive 2.x versions still follow the same system and require a USB Hardware Key.
USB Cameras
USB cameras (including the Flex series, tracking bars, and the Slim3U) are not currently supported in 3.x versions. For USB camera systems, please use Motive 2.x versions. Go to the Motive 2.3 documentation.
For More Information:
Visit our website for more information on the new versions:
What's New: https://www.optitrack.com/software/motive/
Changelog and Download link: https://www.optitrack.com/support/downloads/
With an optimized system setup, motion capture systems are capable of obtaining extremely accurate tracking data from a small to medium sized capture volume. This quick start guide includes general tips and suggestions on precision capture system setups and important cautions to keep in mind. This page also covers some of the precision verification methods in Motive. For more general instructions, please refer to the Quick Start Guide: Getting Started or corresponding workflow pages.
Before going into details on precision tracking with an OptiTrack system, let's start with a brief explanation of the residual value, which is the key reconstruction output for monitoring system precision. The residual value is the average offset distance, in mm, between the converging rays when reconstructing a marker, and it therefore indicates the precision of the reconstruction. A smaller residual value means the tracked rays converge more precisely, yielding a more accurate 3D reconstruction. A well-tracked marker will have a sub-millimeter average residual value. In Motive, the tolerable residual distance is defined in the Reconstruction Settings under the Application Settings panel.
When one or more markers are selected in Live mode or in the 2D Mode of captured data, the corresponding mean residual value is displayed in the Status Panel at the bottom-right corner of Motive.
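For intuition, the hypothetical sketch below triangulates a 3D point from camera rays and reports the mean point-to-ray distance in millimeters. It mirrors the idea of converging rays and residuals described above; it is not Motive's internal solver:

```python
import numpy as np

def triangulate(origins, dirs):
    """Least-squares point closest to a set of camera rays.
    origins, dirs: (N, 3) arrays; dirs must be unit vectors."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, dirs):
        P = np.eye(3) - np.outer(d, d)   # projector perpendicular to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

def mean_residual_mm(point, origins, dirs):
    """Average perpendicular distance (mm) from the point to each ray."""
    dists = [np.linalg.norm((point - o) - ((point - o) @ d) * d)
             for o, d in zip(origins, dirs)]
    return 1000.0 * float(np.mean(dists))   # meters -> millimeters

# Two cameras aimed at a marker near the origin (toy data, in meters).
origins = np.array([[2.0, 1.5, 0.0], [-2.0, 1.5, 0.0]])
dirs = np.array([-o / np.linalg.norm(o) for o in origins])
dirs[0] += [0.0, 0.0005, 0.0]            # tiny aiming/calibration error
dirs[0] /= np.linalg.norm(dirs[0])

p = triangulate(origins, dirs)
print(f"mean residual: {mean_residual_mm(p, origins, dirs):.3f} mm")
```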
First of all, optimize the capture volume for the most precise and accurate tracking results. Avoid populated areas when setting up the system and recording a capture, and clear any obstacles or trip hazards around the capture volume. Physical impacts on the setup will distort the calibration quality, which can be critical when tracking at sub-millimeter accuracy. Lastly, for best results, routinely recalibrate the capture volume.
Motion capture cameras detect reflected infrared light, so having other reflective objects in the volume will negatively affect the results, which can be critical for precise tracking applications. If possible, use background objects that are IR black and non-reflective. Capturing against a dark background provides clear contrast between bright marker pixels and the background, which can be less distinguishable against a white background.
Optimized camera placement techniques will greatly improve the tracking results and measurement accuracy. The following guide highlights important setup instructions for small volume tracking. For more details on general system setup, read through the Hardware Setup pages.
Mounting Locations
For precise tracking, better results are obtained by placing cameras closer to the target object (adjusting focus will be required) in a sphere- or dome-shaped camera arrangement, as shown in the images on the right. Good positional data in all dimensions (X, Y, and Z axes) is attained only when cameras contribute to the calculation from a variety of different locations; each unique vantage adds additional data.
Mount Securely
For the most accurate results, cameras should be perfectly stationary, securely fastened onto a truss system or an extremely rigid object. Any slight deformation or fluctuation of the mount structures may affect the result in sub-millimeter tracking applications. A small truss system is ideal for the setup. Take extreme caution when mounting onto speed rails attached to a wall, because the building structure may shift on hot days.
Increase the f-stop (smaller aperture) to gain a larger depth of field. An increased depth of field keeps a greater portion of the capture volume in focus and makes measurements more consistent throughout the volume.
Especially for close-up captures, camera aim and focus should be adjusted precisely. Aim the cameras towards the center of the capture volume. Optimize the camera focus by zooming into a marker in Motive and rotating the focus knob on the camera until the marker appears at its smallest with the clearest image contrast. To zoom in and out of the camera view, place the mouse cursor over the 2D camera preview window in Motive and use the mouse scroll.
For more information, please read through the Aiming and Focusing workflow page.
The following sections cover key configuration settings that need to be optimized for precision tracking.
Camera settings are configured using the Devices pane and the Properties pane, both of which can be opened from the View tab in Motive.
Details

| Setting | Value | Description |
| --- | --- | --- |
| Number | Varies | The number Motive has assigned to that particular camera. |
| Device Type | Varies | The type of camera Motive has detected (PrimeX 41, PrimeX 13W, etc.). |
| Serial Number | Varies | The serial number of the camera, which uniquely identifies it. |
| Focal Length | Varies | The distance between the camera's image sensor and its lens. |

General

| Setting | Recommended Value | Description |
| --- | --- | --- |
| Enabled | Toggle 'On' | When Enabled is toggled on, the camera is active and able to collect marker data. |
| Rate | Maximum FPS | Set the system frame rate (FPS) to its maximum value. If you wish to use a slower frame rate, use the maximum frame rate during calibration and turn it down for the actual recording. |
| Reconstruction | Toggle 'On' | Determines whether the camera participates in 3D reconstruction. |
| Rate Multiplier | x1 (120Hz) | The rate multiplier, used for syncing external devices with the camera system. |
| Exposure | 250 μs | The exposure of the camera. The higher the number, the more microseconds the camera's sensor is exposed to light. If you are having issues seeing markers, raise the exposure. If there is too much reflection data in the volume, lower the exposure. |
| Threshold (THR) | 200 | Keep the Threshold (THR) and LED values at their default settings. The EXP and LED values are linked, so change only the EXP setting for brighter images. If you turn the EXP higher than 250, make sure to wand extra slowly to avoid blurred markers. |
| LED | Toggle 'On' | In some instances you may want to turn off the IR LEDs on a particular camera; e.g., using an active wand for calibration reduces extraneous reflections from influencing the calibration. |
| Video Mode | Object Mode (default) | |
| IR Filter | Toggle 'On' | Specific to PrimeX 13/13W, SlimX 13, and Prime Color FS cameras. Toggles the 850 nm IR filter, which allows only 850 nm IR light to be visible. When toggled off, all light is visible to the camera's image sensor. |
| Gain | 1: Low (Short Range) | Set the Gain setting to low for all cameras. Higher gain settings will amplify noise in the image. |

Display

| Setting | Recommended Value | Description |
| --- | --- | --- |
| Show Field of View | Toggle 'Off' | |
| Show Frame Delivery Info | Toggle 'Off' | |
Live-reconstruction settings can be configured under the Application Settings panel. These settings determine which data gets reconstructed into 3D data, and, when needed, you can adjust the filter thresholds to prevent inaccurate data from reconstructing. Read through the Application Settings page for more details on each setting. For precision tracking applications, the key settings and suggested values are listed below:

| Setting | Suggested Value | Notes |
| --- | --- | --- |
| Solver Tab: Residual (mm) | < 2.00 | The maximum allowable residual offset for rays contributing to a reconstruction. |
| Solver Tab: Minimum Rays to Start | ≥ 3 | Set the minimum required number of rays higher. More accurate reconstruction is achieved when more rays converge within the allowable residual offset. |
| Camera Tab: Minimum Pixel Threshold | ≥ 3 | Since cameras are placed closer to the tracked markers, each marker appears larger in the camera views. The minimum pixel threshold can be increased to filter out small extraneous reflections if needed. |
| Camera Tab: Circularity | > 0.6 | Raise the circularity threshold so that merged or non-circular reflections are filtered out rather than reconstructed inaccurately. |
The following calibration instructions are specific to precision tracking. For more general information, refer to the Calibration page.
For calibrating small capture volumes for precision tracking, we recommend using a Micron Series wand, either the CWM-250 or CWM-125. These wands are made of invar alloy, which is very rigid and insensitive to temperature, and they are designed to provide a precise and constant reference dimension during calibration. At the bottom of the wand head is a label showing the factory-calibrated wand length with sub-millimeter accuracy. In the Calibration pane, select Micron Series under the OptiWand dropdown menu, and define the exact length under Wand Length.
The CW-500 wand is designed for capturing medium to large volumes and is not suited for calibrating small volumes. Not only does it lack a label indicating a factory-calibrated length, but it is also made of aluminum, which makes it more vulnerable to thermal expansion. During the wanding process, Motive references the wand length when calibrating the capture volume, and any distortion of the wand length causes the calibrated volume to be scaled slightly differently, which can be significant for precise measurements. For this reason, a Micron Series wand is suitable for precision tracking applications.
Note: Never touch the markers on the CWM-250 or CWM-125, since any change can affect the calibration and the overall data.
Precision Capture Calibration Tips
Wand slowly. Waving the wand around quickly at high exposure settings will blur the markers and distort the centroid calculations, ultimately reducing the quality of your calibration.
Avoid occluding any of the calibration markers while wanding. Occluding markers will reduce the quality of the calibration.
A variety of unique samples is needed to achieve a good calibration. Wand in all three dimensions: wave the wand in a variety of orientations and throughout the volume.
Extra wanding in the target area you wish to capture will improve the tracking in the target region.
Wanding the edges of the volume helps improve the lens distortion calculations. This may cause Motive to report a slightly worse overall calibration result, but it provides a better quality calibration, as explained below.
Starting and stopping the calibration process with the wand inside the volume helps avoid collecting rough samples outside the volume when entering and leaving.
Calibration reports and analyzing the reported error are a complicated subject, because the calibration process uses its own samples for validation. For example, sampling near the edge of the volume may improve the accuracy of the system but yield slightly worse calibration results, because the samples near the edge have more error to correct. Acceptable mean error varies based on the size of your volume, the number of cameras, and the desired accuracy. The key metrics to watch are the Mean 3D Error for the Overall Reprojection and the Wand Error. Generally, use calibrations with a Mean 3D Error less than 0.80 mm and a Wand Error less than 0.030 mm. These numbers may be hard to reproduce in regular volumes. Again, the acceptable numbers are subjective, but lower numbers are better in general.
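If you script your capture workflow, the rule-of-thumb thresholds above can be wrapped into a quick acceptance check. The numbers come straight from this page and may need adjusting for your volume and accuracy needs:

```python
# Quick acceptance check for a calibration report, using the rule-of-thumb
# thresholds from this page (adjust for your volume and desired accuracy).

def calibration_acceptable(mean_3d_error_mm: float, wand_error_mm: float,
                           max_3d_error_mm: float = 0.80,
                           max_wand_error_mm: float = 0.030) -> bool:
    return (mean_3d_error_mm < max_3d_error_mm
            and wand_error_mm < max_wand_error_mm)

print(calibration_acceptable(0.65, 0.025))  # True  -> keep this calibration
print(calibration_acceptable(0.95, 0.025))  # False -> consider re-wanding
```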
In general, passive retro-reflective markers provide better tracking accuracy. The boundary of a spherical passive marker can be clearly distinguished, so the system can identify an accurate position for the marker centroid. Active markers, on the other hand, emit light, and the illumination may not appear spherical in the camera view. Even if a spherical diffuser is used, there can be situations where the light is not evenly distributed, which yields inaccurate centroid data. For this reason, passive markers are preferred for precision tracking applications.
For close-up capture, it may be unavoidable to place markers close to one another, and when markers are in close vicinity, their reflections may merge as seen by the camera's imager. Merged reflections have an inaccurate centroid location, or they may even be discarded entirely by the circularity filter or the intrusion detection feature. For best results, keep the circularity filter at a higher setting (>0.6) and decrease the intrusion band in the camera group 2D filter settings so that only relevant reflections are reconstructed. The optimal balance depends on the number and arrangement of cameras in the setup.
There are editing methods to discard or repair the affected data. However, for the most reliable results, such marker intrusions should be prevented before the capture by separating the marker placements or by optimizing the camera placements.
Once a Rigid Body is defined from a set of reconstructed points, utilize the Rigid Body Refinement feature to further refine the Rigid Body definition for precision tracking. The tool allows Motive to collect additional samples in the live mode for achieving more accurate tracking results.
In a mocap system, camera mount structures and other hardware components may be affected by temperature fluctuations. Refer to linear thermal expansion coefficient tables to examine which materials are susceptible to temperature changes, and avoid temperature-sensitive materials for mounting the cameras. For example, aluminum has a relatively high thermal expansion coefficient, so mounting cameras onto aluminum structures may distort the calibration quality. For best accuracy, routinely recalibrate the capture volume, and take temperature fluctuation into account both when selecting the mount structures and before collecting data.
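For a sense of scale, linear thermal expansion follows ΔL = α · L · ΔT. The sketch below uses typical textbook expansion coefficients to show how a modest temperature swing moves a camera mount by a tracking-relevant amount:

```python
# Linear thermal expansion: dL = alpha * L * dT
ALPHA_ALUMINUM = 23e-6   # 1/degC, typical textbook value
ALPHA_STEEL = 12e-6      # 1/degC, for comparison

def expansion_mm(alpha: float, length_m: float, delta_t_c: float) -> float:
    return alpha * length_m * delta_t_c * 1000.0   # meters -> mm

# A 2 m aluminum beam warming by 5 degC grows by ~0.23 mm --
# already significant for sub-millimeter tracking.
print(f"aluminum: {expansion_mm(ALPHA_ALUMINUM, 2.0, 5.0):.2f} mm")
print(f"steel:    {expansion_mm(ALPHA_STEEL, 2.0, 5.0):.2f} mm")
```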
An ideal way to avoid influence from environmental temperature is to install the system in a temperature-controlled volume. If that option is unavailable, routinely calibrate the volume before capture, and recalibrate between sessions when capturing for a long period. The effects are especially noticeable on hot days and can significantly affect your results, so consistently monitor the average residual value and how well the rays converge to individual markers.
The cameras heat up with extended use, and changes in internal hardware temperature may also affect the capture data. For this reason, avoid capturing or calibrating right after powering the system. Tests have found that the cameras need to warm up in Live mode for about an hour until they reach a stable temperature. Typical stable temperatures are between 40-50 degrees Celsius, or about 25 degrees Celsius above the ambient temperature. For Ethernet camera models, camera temperatures can be monitored from the Cameras View in Motive (Cameras View > Eye Icon > Camera Info).
A camera exceeding 80 degrees Celsius is cause for concern: it can cause frame drops and potential harm to the camera. If possible, keep the ambient environment as cool, dry, and consistent as possible.
Especially for measuring at sub-millimeters, even a minimal shift of the setup can affect the recordings. Re-calibrate the capture volume if your average residual values start to deviate. In particular, watch out for the following:
Avoid touching the cameras and the camera mounts.
Keep the capture area away from heavy foot traffic. People shouldn't be walking around the volume while the capture is taking place.
Even closing doors from outside the room may be noticeable in the recording.
The following methods can be used to check the tracking accuracy and to better optimize the reconstructions settings in Motive.
The calibration quality can also be analyzed by checking the convergence of the tracked rays into a marker. This is not as precise as monitoring the residual values directly, but the tracked rays can be used to check the calibration quality of multiple cameras at once. First, make sure tracked rays are visible: Perspective View pane > Eye button > Tracked Rays. Then select a marker in the perspective view. Zoom all the way into the marker (you may need to zoom into the sphere), and you will be able to see the tracking rays (green) converging toward the center of the marker. In a good calibration, all the rays converge to approximately one point, as shown in the following image. Essentially, this is a visual way of examining the average residual offset of the converging rays.
In Motive 3.0, a new feature called Continuous Calibration was introduced. It can help maintain precision for longer between calibrations. For more information, please refer to the Continuous Calibration wiki page.
Welcome to the Quick Start Guide: Getting Started!
This guide provides a quick walk-through of installing and using OptiTrack motion capture systems. Key concepts and instructions are summarized in each section of this page to help you get familiarized with the system and get you started with the capture experience.
Note that Motive offers features far beyond the ones listed in this guide, and the capability of the system can be further optimized to fit your specific capture applications using the additional features. For more detailed information on each workflow, read through the corresponding workflow pages in this wiki: hardware setup and software setup.
For best tracking results, you need to prepare and clean up the capture environment before setting up the system. First, remove unnecessary objects that could block the camera views. Cover open windows and minimize incoming sunlight. Avoid setting up a system over reflective flooring, since IR light from the cameras may be reflected and add noise to the data. If this is not an option, use rubber mats to cover the reflective area. Likewise, items with reflective surfaces or illuminating features should be removed or covered with non-reflective materials to avoid extraneous reflections.
Key Checkpoints for a Good Capture Area
Minimize ambient lights, especially sunlight and other infrared light sources.
Clean capture volume. Remove unnecessary obstacles within the area.
Tape over or cover any remaining reflective objects in the area.
See Also: Hardware Setup workflow pages.
Ethernet Camera Models: PrimeX series and SlimX 13 cameras. Follow the wiring diagram below and connect each of the required system components.
Connect PoE Switch(es) to the Host PC: Start by connecting a PoE switch to the host PC via an Ethernet cable. Since the camera system takes up a large amount of data bandwidth, the Ethernet camera network traffic must be separated from the office/local area network. If the computer used for capture is connected to an existing network, you will need a second Ethernet port or an add-on network card to connect the computer to the camera network. When you do, make sure to turn off the computer's firewall for that particular network under the Windows Firewall settings.
Connect the Ethernet Cameras to the PoE Switch(s): Ethernet cameras connect to the host PC via PoE/PoE+ switches using Cat 6, or above, Ethernet cables.
Power the Switches: The switches must be powered in order to power the cameras. To completely shut down the camera system, power off the network switches.
Ethernet Cables: Ethernet cabling is subject to the limitations of the PoE (Power over Ethernet) and Ethernet communication standards, meaning the distance between a camera and a switch can be up to about 100 meters when using Cat 6 cables (Cat 5e or below is not supported). For best performance, do not connect devices other than the computer to the camera network. Install add-on network cards if additional Ethernet ports are required.
Ethernet Cable Requirements
Cable Type
There are multiple categories of Ethernet cables, each with different specifications for maximum data transmission rate and cable length. For an Ethernet-based system, Category 6 or above Gigabit Ethernet cables should be used. 10 Gigabit Ethernet cables (Cat 6a or above) are recommended, in conjunction with a 10 Gigabit uplink switch, for the connection between the uplink switch and the host PC in order to accommodate the high data traffic.
Electromagnetic Shielding
Please use cables with electromagnetic interference shielding. If unshielded cables are used, cables routed close to one another can interfere with each other and cause cameras to stall in Motive.
External Sync: If you wish to connect external devices, use the eSync synchronization hub. Connect the eSync into one of the PoE switches using an Ethernet cable, or if you have a multi-switch setup, plug the eSync into the aggregation switch.
Uplink Switch: For systems with higher camera counts that use multiple PoE switches, use an uplink Ethernet switch to link all of the switches to the host PC. The switches must be connected in a star topology, with the uplink switch as the central node connecting to the host PC. NEVER daisy-chain multiple PoE switches in series, because doing so can introduce latency to the system.
High Camera Counts: For setups with more than 24 Prime series cameras, we recommend using a 10 Gigabit uplink switch and connecting it to the host PC via an Ethernet cable that supports 10 Gigabit transfer rates (Cat 6a or above). This provides larger data bandwidth and reduces data transfer latency.
PoE Switch Requirement: The PoE switches must be able to provide 15.4 W of power to every port simultaneously. PrimeX 41, PrimeX 22, and Prime Color camera models run in a high power mode to achieve longer tracking ranges, and they require 30 W of power from each port. If you wish to operate these cameras in standard PoE mode, set the LLDP (PoE+) Detection setting to false under the application settings. For network switches provided by OptiTrack, refer to the label for the number of cameras supported by each switch.
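As a sanity check, you can total the per-port draws named above against a switch's overall PoE budget. The sketch below is a rough estimate only; the 370 W budget is a hypothetical example, so verify the actual figures against your switch's datasheet:

```python
# Rough PoE power-budget check for a camera switch.
# Per-port draws follow this page: standard PoE cameras need 15.4 W, while
# PrimeX 41/22 and Prime Color in high power mode need 30 W (PoE+).

def required_watts(standard_cameras: int, high_power_cameras: int) -> float:
    return standard_cameras * 15.4 + high_power_cameras * 30.0

budget = 370.0   # hypothetical total PoE budget of an example switch
need = required_watts(standard_cameras=4, high_power_cameras=8)
print(f"need {need:.1f} W of {budget:.0f} W available: "
      f"{'OK' if need <= budget else 'over budget'}")
```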
See Also: Network setup page.
Optical motion capture systems utilize multiple 2D images from each camera to compute, or reconstruct, corresponding 3D coordinates. For best tracking results, cameras must be placed so that each captures a unique vantage of the target capture area. Place the cameras around the perimeter of the capture volume, as shown in the example below, so that markers in the volume are visible to at least two cameras at all times. Mount cameras securely onto stable structures (e.g. a truss system) so that they do not move throughout the capture. When using tripods or camera stands, ensure that they are placed in stable positions. After placing the cameras, aim them so that their views overlap around the region where most of the capture will take place. Any significant camera movement after system calibration may require re-calibration. Use cable strain relief at the camera end of the camera cables to prevent potential damage to the cameras.
See Also: Camera Placement and Camera Mount Structures pages.
In order to obtain accurate and stable tracking data, it is very important that all of the cameras are correctly focused on the target volume. This is especially important for close-up and long-range captures. For common tracking applications, focus-to-infinity generally works fine; however, it is still important to confirm that each camera in the system is in focus.
To adjust or check camera focus, place some markers in the target tracking area. Then set the camera to raw grayscale mode, increase the exposure and LED settings, and zoom in on one of the retroreflective markers in the capture volume to check the clarity of the image. If the image is blurry, adjust the camera focus until the marker is best resolved.
See Also: Aiming and Focusing page.
In order to properly run a motion capture system using Motive, the host PC must satisfy the minimum system requirements. The required specifications vary depending on the size of the mocap system and the types of cameras used. Consult our Sales Engineers, or use the Build Your Own feature on our website, to find the host PC specification requirements.
Motive is a software platform designed to control motion capture systems for various tracking applications. Motive not only allows the user to calibrate and configure the system, but it also provides interfaces for both capturing and processing of 3D data. The captured data can be recorded or live-streamed into other pipelines.
If you are new to Motive, we recommend reading through the Motive Basics page after going through this guide to learn the basic navigation controls in Motive.
Motive Activation Requirements
The following items are required for activating Motive. Please note that the Motive license must still be valid on the release date of the version you are activating. If the license has expired, please update the license or use an older version of Motive released prior to the license expiration date.
Motive 3.x license
USB Security Key
Host PC Requirements
Required PC specifications may vary depending on the size of the camera system. Generally, the recommended specs below are required for systems with more than 24 cameras.
Recommended:
OS: Windows 10, 11 (64-bit)
CPU: Intel i7 or better, 3+ GHz
RAM: 16GB of memory
GPU: GTX 1050 or better with the latest drivers and support for OpenGL 3.2+
Minimum:
OS: Windows 10, 11 (64-bit)
CPU: Intel i7, 3+ GHz
RAM: 8GB of memory
GPU: Supports OpenGL 3.2+
Download and Install
To install Motive, simply download the Motive software installer for your operating system from the Motive Download Page, then run the installer and follow its prompts.
Note: Anti-virus software can interfere with Motive's ability to communicate with cameras or other devices, and it may need to be disabled or configured to allow the device communication to properly run the system.
License Activation Steps
Insert the USB Security Key into a USB port on the computer, using an adapter if needed.
Launch Motive
Activate your software using the License Tool, which can be accessed in the Motive splash screen. You will need to input the License Serial Number and the Hash Code for your license.
After activation, the License Tool will place the license file associated with the USB Security Key in the License folder. For further license activation questions, visit the Licensing FAQs or contact our Support.
Notes on using USB Security Key
When connecting the USB Security Key to the computer, avoid sharing its USB controller with other USB devices that transmit large amounts of data frequently. For example, if you have external devices (e.g. force plates, NI-DAQ) that communicate via USB, connect those devices to a separate USB controller so that they do not interfere with the Security Key.
The USB Hardware Key from older versions of Motive has been replaced by the USB Security Key in Motive 3.x and above.
Notes on First Connection with a USB Security Key
The Security Key must register with the connected cameras upon initial activation, or when one or more cameras are added to an existing system. This process requires that the host PC be connected to the internet and may take a few minutes. Once the cameras have been registered, an internet connection is no longer required.
By default, Motive will start on the calibration layout with all the necessary panes open. Using this layout, you can calibrate the camera system and construct a 3D tracking volume. The layout may be slightly different for certain camera models or software licenses.
The following panes will be open, including the Properties pane, where you can select cameras, devices, and recorded Takes to view and configure their properties.
The Control Deck, located at the bottom of Motive, is where you control the recording (Live mode) or playback (Edit mode) of capture data. In Live mode, use the Control Deck to start recording and assign a file name for the capture. In Edit mode, use it to control the playback of recorded Takes.
See Also: List of UI pages from the Motive section of the wiki.
Use the following controls for navigating throughout the 2D and 3D viewports in Motive. Most of the navigation controls are customizable, including both mouse actions and hotkeys. These mouse and keyboard controls can be customized through the Application Settings panel.
| Action | Control |
| --- | --- |
| Rotate view | Right-click + drag |
| Pan view | Middle (wheel) click + drag |
| Zoom in/out | Mouse wheel |
| Select in view | Left mouse click |
| Toggle selection in view | CTRL + left mouse click |
| Show one viewport | Shift + 1 |
| Horizontally split the viewport | Shift + 2 |
Now that the cameras are connected and showing up in Motive, the next step is to configure the camera settings. Appropriate camera settings will vary depending on various factors including the capture environment and tracked objects. The overall goal is to configure the settings so that the marker reflections are clearly captured and distinguished in the 2D view of each camera. For a detailed explanation on individual settings, please refer to the Devices pane page.
To check whether the camera settings are optimized, it is best to check both the grayscale mode images and the tracking mode (Object or Precision) images and make sure the marker reflections stand out from the image. You can switch a camera into grayscale mode either in Motive or by using the Aim Assist button on supported cameras. In Motive, right-click on the Cameras Viewport and switch the video mode from the context menu, or change the video mode through the Properties pane.
Exposure Setting
The exposure setting determines how long the camera imagers are exposed per frame of data. With a longer exposure, more light is captured by the camera, creating brighter images that can improve visibility for small and dim markers. However, high exposure values can introduce false markers, larger marker blooms, and marker blurring, all of which can negatively impact marker data quality. It is best to keep the exposure as low as possible while the markers remain clearly visible in the captured images.
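Since exposure is set in microseconds per frame, it helps to compare it against the frame period. A quick illustration (simple arithmetic, not specific to any camera model):

```python
# Compare exposure time (microseconds) against the frame period to see how
# small the sensor's light-gathering window really is.

def exposure_duty_cycle(exposure_us: float, fps: float) -> float:
    frame_period_us = 1_000_000.0 / fps
    return exposure_us / frame_period_us

# At 120 FPS the frame period is ~8333 us, so a 250 us exposure keeps the
# shutter open for only ~3% of each frame.
print(f"{exposure_duty_cycle(250, 120):.1%}")   # -> 3.0%
print(f"{exposure_duty_cycle(750, 120):.1%}")   # -> 9.0%
```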
Tip: For the calibration process, click the Layout → Calibrate menu (CTRL + 1) to access the calibration layout.
In order to start tracking, all cameras must first be calibrated. Through the camera calibration process, Motive computes the position and orientation of each camera (extrinsics) as well as the amount of lens distortion in the captured images (intrinsics). Using the calibration results, Motive constructs a 3D capture volume, and motion tracking is accomplished within this volume. All of the calibration tools can be found in the Calibration pane. Read through the Calibration page to learn about the calibration process and the other tools available for more efficient workflows.
See Also: Calibration page.
Duo/Trio Tracking Bars: Camera calibration is not needed for Duo/Trio Tracking Bars; the cameras are pre-calibrated at fixed camera placements. This allows the tracking bars to work right out of the box without the calibration process. To adjust the ground plane, use the Coordinate System Tools in Motive.
Starting a Calibration
To start a system calibration, open the Calibration Pane. Under the Calibration pane, you can choose to start a new calibration or to modify the existing one. For this guide, click New Calibration for a fresh calibration.
Masking
Before the system calibration, any extraneous reflections or unnecessary markers should ideally be removed or covered so that they are not seen by the cameras. However, it may not always be possible to remove all of them. In this case, these extraneous reflections can be ignored by applying masks over them during the calibration.
Check the calibration pane to see if any of the cameras are seeing extraneous reflections or noise in their view. A warning sign will appear over these cameras.
Check the camera view of the corresponding camera to identify where the extraneous reflection is coming from, and if possible, remove them from the capture volume or cover them so that the cameras do not see them.
If reflections still exist, click Mask to automatically apply masks over all of the reflections detected in the camera views.
Once all of the reflections have been masked or removed, click Continue to proceed to the wanding step.
Wanding
In the wanding stage, we will use the Calibration Wand to collect wanding samples that will be used for calibrating the system.
Set the Calibration Type to Full.
Under the Wand settings, specify the wand that will be used to calibrate the volume. It is very important to input the matching wand size here: if an incorrect dimension is given to Motive, the calibrated 3D volume will be scaled incorrectly.
Click Start Wanding to start collecting the wanding sample.
Once the wanding process starts, bring your calibration wand into the capture volume and start waving it gently across the entire volume. Draw figure-eights repetitively with the wand to collect samples at varying orientations, and cover as much space as possible for sufficient sampling. Wanding trails are shown in color in the 2D View, and a grid/table displaying the status of the wanding process appears in the Calibration pane to monitor progress.
As each camera collects the wanding samples, the camera grid representing the wanding status of each camera will start changing its color to bright green. This provides visual feedback on whether sufficient samples have been collected by each camera. Wave the wand until all boxes are filled with bright green color.
Once enough samples have been collected, press the Start Calculation button to start calibrating. The calculation may take a few minutes to complete.
When the calculation is finished, its results will get displayed. If the overall result is acceptable, click Continue to proceed to setting up the ground. If the result is not satisfactory, click Cancel and go through the wanding once more.
Wanding tips
For best results, collect wand samples evenly and comprehensively throughout the volume, covering both low and high elevations. If you wish to start wanding inside the volume, cover one of the wand markers and expose it where you wish to start. When at least two cameras detect all three markers while no other reflections are present in the volume, the wand is recognized and Motive starts collecting samples.
A sufficient sample count varies for different sized volumes, but in general, collect 2500 ~ 6000 samples for each camera. Once a sufficient number of samples has been collected, press the Start Calculation button under the Calibration section.
During the wanding process, each camera needs to see only the three markers on the calibration wand. If any camera detects extraneous reflections, go back to the masking step to mask them.
Setting the Ground Plane
Now that all of the cameras have been calibrated, the next step is to define the ground plane of the capture volume.
Place a Calibration Square inside the capture volume. Position the square so that the vertex marker is placed directly over the desired global origin.
Orient the calibration square so that the longer arm points along the desired +Z axis and the shorter arm points along the desired +X axis of the volume. Motive uses a y-up right-handed coordinate system.
Level the calibration square parallel to the ground plane.
At this point, the Calibration pane should detect which calibration square has been placed in the tracking volume. If it does not, select the three markers of the calibration square manually in the 3D view in Motive.
Click Set Ground Plane to complete the calibration.
Once the camera system has been calibrated, Motive is ready to collect data. Before doing so, let's prepare the session folders for organizing the capture recordings and define the trackable assets, including Rigid Bodies and/or Skeletons.
Motive Recordings
See Also: Motive Basics page.
Motive Profiles
Motive's software configurations are saved to Motive Profiles (*.motive extension). All of the application-related settings can be saved into the Motive profiles, and you can export and import these files and easily maintain the same software configurations.
Place the retro-reflective markers onto subjects (Rigid Body or Skeleton) that you wish to track. Double-check that the markers are attached securely. For skeleton tracking, open the Builder pane, go to skeleton creation options, and choose a marker set you wish to use. Follow the skeleton avatar diagram for placing the markers. If you are using a mocap suit, make sure that the suit fits as tightly as possible. Motive derives the position of each body segment from related markers that you place on the suit. Accordingly, it is important to prevent the shifting of markers as much as possible. Sample marker placements are shown below.
See Also: Markers page for marker types, or Rigid Body Tracking and Skeleton Tracking page for placement directions.
Tip: For creating trackable assets, click the Layout → Create menu item to access the model creation layout.
Create Rigid Body
To define a Rigid Body, simply select three or more markers in the perspective view, right-click, and select Rigid Body → Create Rigid Body From Selected. You can also use the CTRL+T hotkey, or the Builder pane, to create Rigid Body assets.
Create Skeleton
To define a skeleton, have the actor enter the volume with markers attached at the appropriate locations. Open the Builder pane and select Skeleton and Create. Under the marker set section, select the marker set you wish to use, and a corresponding model with the desired marker locations will be displayed. After verifying that the marker locations on the actor correspond to those in the Builder pane, instruct the actor to strike the calibration pose. The most common calibration pose is the T-pose: a proper standing posture with the back straight and head looking directly forward, with both arms stretched out to the sides to form a "T" shape. While the actor holds the T-pose, select all of the markers of the desired skeleton in the 3D view and click the Create button in the Builder pane. In some cases, you may not need to select the markers if only the desired actor is in view.
See Also: Rigid Body Tracking page and Skeleton Tracking page.
Tip: For recording, use the Layout → Capture menu item to access the capture layout.
Once the volume is calibrated and skeletons are defined, you are ready to capture. In the Control Deck at the bottom, press the dimmed red record button, or simply press the spacebar while in Live mode, to begin capturing. The button illuminates bright red to indicate that recording is in progress. Stop recording by clicking the record button again; a corresponding capture file (TAK extension), also known as a capture Take, will be saved in the current session folder. Once a Take has been saved, you can play back, reconstruct, edit, and export the data in a variety of formats for additional analysis or use in most 3D software.
When tracking skeletons, it is beneficial to start and end the capture with a T-pose. This allows you to recreate the skeleton in post-processing when needed.
See Also: Data Recording page.
After capturing a Take, the recorded 3D data and its trajectories can be post-processed using the data editing tools found in the Edit Tools pane. These tools provide post-processing features such as deleting unreliable trajectories, smoothing selected trajectories, and interpolating missing (occluded) marker positions. Post-editing the 3D data can improve the quality of the tracking data.
Tip: For data editing, use the Layout → Edit menu item to access the edit layout.
General Editing Steps
Skim through the overall frames in a Take to get an idea of which frames and markers need to be cleaned up.
Refer to the Labels pane and inspect gap percentages in each marker.
Select a marker that is often occluded or misplaced.
Look through the frames in the Graph pane, and inspect the gaps in the trajectory.
For each gap in frames, look for an unlabeled marker at the expected location near the solved marker position. Re-assign the proper marker label if the unlabeled marker exists.
Use Trim Tails feature to trim both ends of the trajectory in each gap. It trims off a few frames adjacent to the gap where tracking errors might exist. This prepares occluded trajectories for Gap Filling.
Find the gaps to be filled, and use the Fill Gaps feature to model the estimated trajectories for occluded markers (see the interpolation sketch after these steps).
Re-solve assets to update the solve from the edited marker data.
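For intuition only, the sketch below fills a short gap in a single marker trajectory with a cubic spline using NumPy/SciPy. Motive's Fill Gaps feature offers more sophisticated, model-based fills, so treat this as a conceptual stand-in:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def fill_gap(frames, positions):
    """Fill NaN gaps in an (N, 3) marker trajectory with a cubic spline.
    Conceptual illustration only -- not Motive's gap-filling algorithm."""
    positions = positions.copy()
    valid = ~np.isnan(positions[:, 0])
    spline = CubicSpline(frames[valid], positions[valid])
    positions[~valid] = spline(frames[~valid])
    return positions

frames = np.arange(10)
traj = np.column_stack([np.sin(frames * 0.3), np.cos(frames * 0.3), frames * 0.01])
traj[4:6] = np.nan                 # simulate a 2-frame occlusion
filled = fill_gap(frames, traj)
print(filled[4:6])                 # interpolated positions for the gap
```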
Markers detected in the camera views are trajectorized into 3D coordinates. The reconstructed markers need to be labeled so Motive can distinguish different trajectories within a capture. Trajectories of labeled reconstructions can be exported individually or solved together to track the movements of the target subjects. Markers associated with Rigid Bodies and Skeletons are labeled automatically through the auto-labeling process. Note that Rigid Body and Skeleton markers can be auto-labeled both in Live mode (before capture) and in Edit mode (after capture). Individual markers can also be labeled, but each must be manually labeled in post-processing using assets and the Labeling pane. The manual labeling tools can also be used to correct labeling errors. Read through the Labeling page for more details on assigning and editing marker labels.
Auto-label: Automatically label sets of Rigid Body markers and skeleton markers using the corresponding asset definitions.
Manual Label: Label individual markers manually using the Labeling pane, assigning labels defined in the Marker Set, Rigid Body, or Skeleton assets.
See Also: Labeling page.
Changing Marker Labels and Colors
When needed, you can use the Constraints pane to adjust marker labels for both Rigid Body and Skeleton markers. You can also adjust marker sticks and marker colors as needed.
Motive exports reconstructed 3D tracking data in various file formats, and exported files can be imported into other pipelines to further utilize capture data. Supported formats include CSV and C3D for Motive: Tracker, and additionally FBX, BVH, and TRC for Motive: Body. To export tracking data, select a Take to export and open the export dialog window, accessible from File → Export Tracking Data or by right-clicking a Take → Export Tracking Data in the Data pane. Multiple Takes can be selected and exported from Motive or by using the Motive Batch Processor. From the export dialog window, the frame rate, measurement scale, and frame range of the exported data can be configured. Frame ranges can also be specified by selecting a frame range in the Graph View pane before exporting a file. In the export dialog window, corresponding export options are available for each file format.
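If you post-process exported CSV files in your own scripts, note that they begin with several metadata and header rows before the frame data. The exact layout varies with the Motive version and export options, so the file name and skiprows value below are assumptions to check against your own file:

```python
import pandas as pd

# Load a Motive CSV export. The number of metadata/header rows varies with
# the Motive version and export options -- inspect your file and adjust
# skiprows accordingly. "capture_take.csv" is a hypothetical file name.
take = pd.read_csv("capture_take.csv", skiprows=6)

print(take.columns[:6])  # typically Frame, Time, then per-marker X/Y/Z columns
print(take.head())
```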
See Also: Data Export page.
Motive offers multiple options for streaming tracking data to external applications in real time. Tracking data can be streamed in both Live mode and Edit mode. Streaming plugins are available for Autodesk MotionBuilder, Visual3D, The MotionMonitor, Unreal Engine 4, 3ds Max, Maya (VCS), and VRPN, and they can be downloaded from the OptiTrack website. For other streaming options, the NatNet SDK enables users to build custom client and server applications to stream capture data. Common motion capture applications rely on real-time tracking, and the OptiTrack system is designed to deliver data at extremely low latency, even when streaming to third-party pipelines. Detailed instructions for specific streaming protocols are included in the PDF documentation that ships with the respective plugins and SDKs.
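As a starting point for a custom client, the sketch below is modeled on the Python sample client (NatNetClient.py) that ships in the NatNet SDK's samples folder. The class, attribute, and callback names follow recent versions of that sample and may differ between SDK versions, so treat this as a template rather than an exact API reference:

```python
# Sketch of receiving Rigid Body data via the NatNet SDK's Python sample
# client (NatNetClient.py from the SDK's samples folder). Names follow
# recent versions of that sample and may differ in other SDK versions.
from NatNetClient import NatNetClient

def receive_rigid_body_frame(body_id, position, rotation):
    # position: (x, y, z) in meters; rotation: quaternion (qx, qy, qz, qw)
    print(f"rigid body {body_id}: pos={position} rot={rotation}")

client = NatNetClient()                       # defaults assume a local server
client.rigid_body_listener = receive_rigid_body_frame
client.run()                                  # starts the data/command threads
```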
See Also: Data Streaming page
This page is an introduction showing how to use OptiTrack cameras to set up an LED Wall for Virtual Production. This process is also called In-Camera Virtual Effects or InCam VFX. This is an industry technique used to simulate the background of a film set to make it seem as if the actor is in another location.
This tutorial requires Motive 2.3.x, Unreal Engine 4.27, and the Unreal Engine: OptiTrack Live Link Plugin.
This is a list of required hardware and what each portion is used for.
The OptiTrack system is used to track the camera, calibration checkerboard, (optional) LED Wall, and (optional) any other props or additional cameras. As far as OptiTrack hardware is concerned, you will need all of the typical hardware for a motion capture system plus an eSync2, BaseStation, CinePuck, Probe, and a few extra markers. Please refer to the Quick Start Guide for instructions on how to do this.
You will need one computer to drive Motive/OptiTrack and another to drive the Unreal Engine System.
Motive PC - The CPU is the most important component and should use the latest generation of processors.
Unreal Engine PC - Both the CPU and GPU are important. However, the GPU in particular needs to be top of the line to render the scene, for example an RTX 3080 Ti. Setups that involve multiple LED walls stitched together will require graphics cards that can synchronize with each other, such as the NVIDIA A6000.
The Unreal Engine computer will also require an SDI input card with both SDI and genlock support. We used the BlackMagic Decklink SDI 4K and the BlackMagic Decklink 8K Pro in our testing, but other cards will work as well.
You will need a studio video camera with SDI out, timecode in, and genlock in support. Any studio camera with these BNC ports will work, and there are a lot of different options for different budgets. Here are some suggestions:
Sony PXW-FS7 (What we use internally)
Etc...
Cameras without these synchronization features can be used, but may look like they are stuttering due to frames not perfectly aligning.
A camera dolly or other type of mounting system will be needed to move and adjust the camera around your space, so that the movement looks smooth.
Your studio camera should have a cage around it in order to mount objects to the outside of it. You will need to rigidly mount your CinePuck to the outside. We used SmallRig NATO Rail and Clamps for the cage and Rigid Body mounting fixtures.
You'll also need a variety of cables to connect from the camera back to where the computers are located. This includes power cables, BNC cables, USB extension cables (optional, for powering the CinePuck), and so on. These are not all listed here, since they depend on your particular system setup.
Many setups will want a lens encoder in the mix. This is only necessary if you plan on zooming the lens in or out between shots. We do not use this device in this example, for simplicity.
In order to run your LED wall, you will need two things: an LED wall and a video processor.
For large walls composed of LED wall subsections, you will need an additional video processor and an additional render PC for each wall, as well as an SDI splitter. We are using a single LED wall for simplicity.
The LED wall portion comprises the grid of LED lights, the power structure, and the connections from the panels to a video processor; the wall itself cannot accept an HDMI signal directly.
We used Planar TVF 125 for our video wall, but there are many other options out there depending on your needs.
The video processor is responsible for taking an HDMI/Display Port/SDI signal and rendering it on the LED wall. It's also responsible for synchronizing the refresh rate of the LED wall with external sources.
The video processor we used for controlling the LED wall was the Color Light Z6. However, Brompton Technology video processors are a more typical film standard.
You will either need a timecode generator AND a genlock generator, or a single device that does both. Without these devices, the camera's exposure will not align with when the LED wall renders, and you may see LED wall rendering artifacts in the shot. These signals are used to synchronize Motive, the cinema camera, the LED wall(s), and any other devices together.
Timecode - The timecode signal should be fed into Motive and the Cinema camera. The SDI signal from the camera will plug into the SDI card, which will carry the timecode to the Unreal Engine computer as well.
Genlock - The genlock should be fed into Motive, the cinema camera, and the Video Processor(s).
Timecode is for frame alignment. It allows you to synchronize data in post by aligning the timecode values together, as sketched below. (However, it does not guarantee that the cameras expose and the LED wall renders at the same time.) A variety of manufacturers' timecode generators will work. Here are some suggestions:
Etc...
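To make timecode alignment concrete, here is a minimal sketch of converting non-drop-frame SMPTE timecode (hh:mm:ss:ff) to an absolute frame count; drop-frame formats need extra handling not shown here:

```python
# Convert a non-drop-frame SMPTE timecode (hh:mm:ss:ff) to an absolute frame
# count so data streams stamped with the same timecode can be aligned in post.

def timecode_to_frames(tc: str, fps: int) -> int:
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

a = timecode_to_frames("01:00:00:12", fps=24)
b = timecode_to_frames("01:00:01:00", fps=24)
print(b - a)   # 12 frames apart -> offset to apply when aligning the streams
```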
Genlock is for frame synchronization. It allows you to synchronize data in real time by aligning the instants when a camera exposes or an LED wall renders its image. (However, it does not align frame numbers, so one system could be on frame 1 and another on frame 23.) A variety of manufacturers' genlock generators will work. Here are some suggestions:
Etc...
Below is a diagram that shows what devices are connected to each other. Both Genlock and Timecode are connected via BNC ports on each device.
Plug the Genlock Generator into:
eSync2's Genlock-In BNC port
Any of the Video Processor's BNC ports
Studio Video Camera's Genlock port
Plug the TimeCode Generator into:
eSync2's Timecode-In BNC port
Studio Video Camera's TC IN BNC port
Plug the Studio Video Camera into:
Unreal Engine PC SDI IN port for Genlock via the SDI OUT port on the Studio Video Camera
Unreal Engine PC SDI IN port for Timecode via the SDI OUT port on the Studio Video Camera
A rigid board with a black and white checkerboard on it is needed to calibrate the lens characteristics. This object will likely be replaced in the future.
There are a lot of hardware devices required, so below is a rough list of required hardware as a checklist.
Truss or other mounting structure
Prime/PrimeX Cameras
Ethernet Cables
Network Switches
Calibration Wand
Calibration Square
Motive License
License Dongle
Computer (for Motive)
Network Card for the Computer
CinePuck
BaseStation (for CinePuck)
eSync2
BNC Cables (for eSync2)
Timecode Generator
Genlock Generator
Probe (optional)
Extra markers or trackable objects (optional)
Cinema/Broadcast Camera
Camera Lens
Camera Movement Device (ex. dolly, camera rails, etc...)
Camera Cage
Camera power cables
BNC Cables (for timecode, SDI, and Genlock)
USB C extension cable for powering the CinePuck (optional)
Lens Encoder (optional)
Truss or mounting system for the LED Wall
LED Wall
Video Processor
Cables to connect between the LED Wall and Video Processor
HDMI or other video cables to connect to Unreal PC
Computer (for Unreal Engine)
SDI Card for Cinema Camera input
Video splitters (optional)
Video recorder (for recording the camera's image)
Checkerboard for Unreal calibration process
Non-LED Wall based lighting (optional)
Next, we'll cover how to configure Motive for tracking.
We assume that you have already set up and calibrated your camera system before starting this section. If you need help getting started with Motive, please refer to our Getting Started wiki page.
After calibrating Motive, you'll want to set up your active hardware. This requires a BaseStation and a CinePuck.
Plug the BaseStation into a Power over Ethernet (PoE) switch just like any other camera.
CinePuck
Firmly attach the CinePuck to your Studio Camera using your SmallRig NATO Rail and Clamps on the cage of the camera.
The CinePuck can be mounted anywhere on the camera, but for best results put the puck closer to the lens.
Turn on your CinePuck, and let it calibrate the IMU bias by waiting until the flashing red and orange lights turn into flashing green lights.
It is recommended to power the CinePuck over USB for the duration of filming a scene to avoid running out of battery power; a light on the CinePuck should turn on when power is connected.
Change the tracking mode to Active + Passive.
Create a Rigid Body out of the CinePuck markers.
For active markers, increasing the allowable residual will usually improve tracking.
Go through a refinement process in the Builder pane to get the highest quality Rigid Body.
Show advanced settings for that Rigid Body, then input the Active Tag ID and Active RF (radio frequency) Channel for your CinePuck.
If you don’t have this information, consult the IMU tag instructions found here: Active Marker Tracking: IMU Setup.
If you input the IMU properties incorrectly or it is not successfully connecting to the BaseStation, then your Rigid Body will turn red. If you input the IMU properties correctly and it successfully connects to the BaseStation, then it will turn orange and need to go through a calibration process. Please refer to the table below for more detailed information.
You will need to move the Rigid Body around in each axis until it turns back to the original color. At this point you are tracking with both the optical marker data and the IMU data through a process called sensor fusion. This takes the best aspects of both the optical motion capture data and the IMU data to make a tracking solution better than when using either individually. As an option, you may now turn the minimum markers for your Rigid Body down to 1 or even 0 for difficult tracking situations.
The color of the Rigid Body in the viewport indicates the IMU connection state:
If the color matches the assigned Rigid Body color, Motive is connected to the IMU and receiving data.
If the color is orange, the IMU is attempting to calibrate. Slowly rotate the object until the IMU finishes calibrating.
If the color is red, the Rigid Body is configured for receiving IMU data, but no data is coming through the designated RF channel. Make sure the Active Tag ID and RF channel values match the configuration on the active Tag/Puck.
After Motive is configured, we'll need to set up the LED wall and calibration board as trackable objects. This is not strictly necessary for the LED wall, but it will make setup easier later and make the exact placement of the ground plane unimportant.
Before configuring the LED wall and calibration board, you'll first want to create a probe Rigid Body. The probe can be used to measure locations in the volume using the calibrated position of its metal tip. For more information on using the probe measurement tool, please visit our wiki page Measurement Probe Kit Guide.
Place four to six markers on the LED Wall without covering the LEDs on the Wall.
Use the probe to sample the corners of the LED Wall.
You will need to make a simple plane geometry that is the size of your LED wall using your favorite 3D editing tool such as Blender or Maya. (A sample plane comes with the Unreal Engine Live Link plugin if you need a starting place.)
If the plane does not perfectly align with the probe points, then you will need to use the gizmo tool to align the geometry. If you need help setting up or using the Gizmo tool please visit our other wiki page Gizmo Tool: Translate, Rotate, and Scale Gizmo.
Any changes you make to the geometry will need to be on the Rigid Body position and not the geometry offset.
You can make these adjustments using the Builder pane, then zeroing the Attach Geometry offsets in the Properties pane.
Place four to six markers without covering the checkered pattern.
Use the probe to sample the bottom-left vertex of the grid.
Use the gizmo tool to orient the Rigid Body pivot and place pivot in the sampled location.
Next, you'll need to make sure that your eSync is configured correctly.
If not already done, plug your genlock and timecode signals into the appropriately labeled eSync input ports.
Select the eSync in the Devices pane.
In the Properties pane, check to see that your timecode and genlock signals are coming in correctly at the bottom.
Then, set the Source to Video Genlock In, and set the Input Multiplier to 4 if your genlock is at 30 Hz, or 5 if your genlock is at roughly 24 Hz (see the sketch below for the arithmetic).
Your cameras should stop tracking for a few seconds, then the rate in the Devices pane should update if you are configured correctly.
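As a rough Python sketch of that multiplier arithmetic (the ~120 Hz target camera rate is an assumption drawn from the 30 Hz x 4 and ~24 Hz x 5 examples above; use your own system's intended rate):

def pick_input_multiplier(genlock_hz: float, target_hz: float = 120.0) -> int:
    # Choose the integer multiplier that lands the camera rate nearest the target.
    multiplier = max(1, round(target_hz / genlock_hz))
    print(f"{genlock_hz} Hz genlock x {multiplier} = {genlock_hz * multiplier:.2f} Hz camera rate")
    return multiplier

pick_input_multiplier(30.0)    # -> 4 (120.00 Hz)
pick_input_multiplier(23.976)  # -> 5 (119.88 Hz)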
Make sure to turn on Streaming in Motive, then you are all done with the Motive setup.
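If you want to confirm that Motive's streaming is actually reaching the network, a minimal Python check is to join the NatNet data multicast group and wait for a packet. The address (239.255.42.99) and data port (1511) are Motive's defaults and must match your Streaming settings; this only proves packets are flowing, so use the NatNet SDK or the Live Link plugin for real parsing:

import socket
import struct

MCAST_GRP = "239.255.42.99"  # Motive's default multicast group (assumed)
DATA_PORT = 1511             # Motive's default data port (assumed)

# Bind to the data port and join the multicast group.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", DATA_PORT))
mreq = struct.pack("4sl", socket.inet_aton(MCAST_GRP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

# Block until one frame-data packet arrives from Motive.
data, addr = sock.recvfrom(65535)
print(f"Received a {len(data)}-byte NatNet packet from {addr[0]}")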
Start Unreal Engine and choose the default project under the “Film, Television, and Live Events” section called “InCamera VFX”
Before we get started, verify that the following plugins are enabled:
Camera Calibration (Epic Games, Inc.)
OpenCV Lens Distortion (Epic Games, Inc.)
OptiTrack - LiveLink (OptiTrack)
Media Player Plugin for your capture card (For example, Blackmagic Media Player)
Media Foundation Media Player
WMF Media Player
Many of these will be already enabled.
The main setup process consists of four general steps:
1. Set up the video media data.
2. Set up FIZ and Live Link sources.
3. Track and calibrate the camera in Unreal Engine.
4. Set up nDisplay.
Right click in the Content Browser Panel > Media > Media Bundle and name the Media Bundle something appropriate.
Double click the Media Bundle you just created to open the properties for that object.
Set the Media Source to the Blackmagic Media Source, the Configuration to the resolution and frame rate of the camera, and set the Timecode Format to LTC (Linear Timecode).
Drag this Media Bundle object into the scene and you’ll see your video appear on a plane.
You’ll also need to create two other video sources doing roughly the same steps as above.
Right click in the Content Browser Panel > Media > Blackmagic Media Source.
Open it, then set the configuration and timecode options.
Right click in the Content Browser Panel > Media > Media Profile.
Click Configure Now, then Configure.
Under Media Sources set one of the sources to Blackmagic Media Source, then set the correct configuration and timecode properties.
Before we set up timecode and genlock, it’s best to have a few visual metrics visible to validate that things are working.
In the Viewport click the triangle dropdown > Show FPS and also click the triangle dropdown > Stat > Engine > Timecode.
This will show timecode and genlock metrics in the 3D view.
If not already open you’ll probably want the Window > Developer Tools > Timecode Provider and Window > Developer Tools > Genlock panels open for debugging.
You should notice that your timecode and genlock are noticeably incorrect; this will be corrected in the steps below.
The timecode will probably just be the current time.
To create a timecode blueprint, right click in the Content Browser Panel > Blueprint > BlackmagicTimecodeProvider and name the blueprint something like “BM_Timecode”.
The settings for this should match what you did for the Video Data Source.
Set the Project Settings > Engine General Settings > Timecode > Timecode Provider = “BM_Timecode”.
At this point your timecode metrics should look correct.
Right click in the Content Browser Panel > Blueprint > BlackmagicCustomTimeStep and name the blueprint something like “BM_Genlock”.
The settings for this should match what you did for the Video Data Source.
Set the Project Settings > Engine General Settings > Framerate > Custom TimeStep = “BM_Genlock”.
Your genlock pane should be reporting correctly, and the FPS should be roughly your genlock rate.
Debugging Note: Sometimes you may need to close then restart the MediaBundle in your scene to get the video image to work.
Shortcut: There is a shortcut for setting up the basic Focus Iris Zoom file and the basic lens file. In the Content Browser pane you can click View Option and Show Plugin Content, navigate to the OptiTrackLiveLink folder, then copy the contents of this folder into your main content folder. Doing this will save you a lot of steps, but we will cover how to make these files manually as well.
We need to make a blueprint responsible for controlling our lens data.
Right click the Content Browser > Live Link > Blueprint Virtual Subject, then select the LiveLinkCameraRole in the dropdown.
Name this file something like “FIZ_Data”.
Open the blueprint. Create two new objects called Update Virtual Subject Static Data and Update Virtual Subject Frame Data.
Connect the Static Data one to Event on Initialize and the Frame Data one to Event on Update.
Right click on the blue Static Data and Frame Data pins and Split Struct Pin.
In the Update Virtual Subject Static Data object:
Disable the Location Supported and Rotation Supported options, then enable the Focus Distance Supported, Aperture Supported, and Focal Length Supported options.
Create three new float variables called Zoom, Iris, and Focus.
Drag them into the Event Graph and select Get to allow those variables to be accessed in the blueprint.
Connect Zoom to Frame Data Focal Length, connect Iris to Frame Data Aperture, and connect Focus to Frame Data Focus Distance.
Compile your blueprint.
Select your variables and set the default value to the lens characteristics you will be using.
For our setup we used a Zoom of 20 mm, an Iris of f/2.8, and a Focus of 260 cm.
Compile and save your FIZ blueprint.
Both the Focus and Iris graphs should form an elongated "S" shape based on the two data points provided for each in the steps below.
To create a lens file right click the Content Browser > Miscellaneous > Lens File, then name the file appropriately.
Double click the lens file to open it.
Switch to the Lens File Panel.
Click the Focus parameter.
Right click in the graph area and choose Add Data Point, click Input Focus and enter 10, then enter 10 for the Encoder mapping.
Repeat the above step to create a second data point, but with values of 1000 and 1000.
Click the Iris parameter.
Right click in the graph area and choose Add Data Point.
Click Input Iris and enter 1.4, then enter 1.4 for the Encoder mapping.
Repeat the above step to create a second data point, but with values of 22 and 22.
Save your lens file.
The above process is to set up the valid ranges for our lens focus and iris data. If you use a lens encoder, then this data will be controlled by the input from that device.
In the Window > Live Link pane, click the + Source icon, then Add Virtual Subject.
Choose the FIZ_Data object that we created in the FIZ Data section above and add it.
Click the + Source icon, navigate to the OptiTrack source, and click Create.
Click Presets and create a new preset.
Edit > Project Settings and Search for Live Link and set the preset that you just created as the Default Live Link Preset.
You may want to restart your project at this point to verify that the live link pane auto-populates on startup correctly. Sometimes you need to set this preset twice to get it to work.
From the Place Actors window, create an Empty Actor; this will act as the camera parent.
Add it to the nDisplay_InCamVFX_Config object.
Create another actor object and make it a child of the camera parent actor.
Zero out the location of the camera parent actor from the Details pane under Transform.
For our setup, in the image to the right, we have labeled the empty actor “Cine_Parent” and its child object “CineCameraActor1”.
Select the default “CineCameraActor1” object in the World Outliner pane.
In the Details pane there should be a total of two LiveLinkComponentControllers.
You can add a new one by using the + Add Component button.
For our setup we have labeled one live link controller “Lens” and the other “OptiTrack”.
Click Subject Representation and choose the Rigid Body associated with your camera.
Click Subject Representation and choose the virtual camera subject. Then, in the “Lens” Live Link Controller, navigate to Role Controllers > Camera Role > Camera Calibration > Lens File Picker and select the lens file you created. This process allows your camera to be tracked and associates the lens data with the camera you will be using.
Select the Place Actors window to create an Empty Actor and add it to the nDisplay_InCamVFX_Config object.
Zero out the location of this actor.
In our setup we have named our Empty Actor "Checkerboard_Parent".
From the Place Actors window also create a “Camera Calibration Checkerboard” actor for validating our camera lens information later.
Make it a child of the “Checkerboard_Parent” actor from before.
Configure the Num Corner Row and Num Corner Cols.
These values should be one less than the number of black/white squares on your calibration board. For example, if your calibration board has 9 rows of alternating black and white squares and 13 columns across of black and white squares, you would input 8 in the Num Corner Row field and 12 in the Num Corner Cols field.
Also input the Square Side Length, which is the measurement of a single square (black or white); the sketch after these steps shows the arithmetic.
Set the Odd Cube Materials and Even Cube Materials to solid colors to make it more visible.
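As a minimal Python sketch of the corner-count arithmetic from the steps above (the 9 x 13 board and 40 mm square size are hypothetical; substitute your board's actual values):

def checkerboard_params(square_rows: int, square_cols: int, square_mm: float):
    # Interior corner counts are one less than the number of squares per side.
    return square_rows - 1, square_cols - 1, square_mm

rows, cols, side = checkerboard_params(9, 13, 40.0)
print(f"Num Corner Row = {rows}, Num Corner Cols = {cols}, Square Side Length = {side} mm")
# -> Num Corner Row = 8, Num Corner Cols = 12, Square Side Length = 40.0 mm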
Select "Checkerboard_Parent" and + Add Component of a Live Link Controller.
Add the checkerboard Rigid Body from Motive as the Subject Representation.
At this point your checkerboard should be tracking in Unreal Engine.
Double click the "Lens" file from earlier and go to the Calibration Steps tab and the Lens Information section.
On the right, select your Media Source.
Set the Lens Model Name and Serial Number to some relevant values based on what physical lens you are using for your camera.
The Sensor Dimensions is the trickiest portion to get correct here.
This is the physical size of the image sensor on your camera in millimeters.
You will need to consult the documentation for your particular camera to find this information.
For example, for the Sony FS7 outputting 1920x1080, we'd input X = 22.78 mm and Y = 12.817 mm for the Sensor Dimensions (see the sanity-check sketch below).
The lens information will calculate the intrinsic values of the lens you are using.
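One quick way to catch a wrong sensor-dimension lookup is to check that the sensor's aspect ratio matches the output resolution's, as in this small Python sketch using the FS7 numbers quoted above (a large mismatch usually means the wrong sensor crop or scan mode was looked up):

def aspect_mismatch(res_x: int, res_y: int, sensor_x_mm: float, sensor_y_mm: float) -> float:
    # Difference between the resolution aspect ratio and the physical sensor aspect ratio.
    return abs(res_x / res_y - sensor_x_mm / sensor_y_mm)

print(f"{aspect_mismatch(1920, 1080, 22.78, 12.817):.4f}")  # ~0.0005, effectively a match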
Choose the Lens Distortion Checkerboard algorithm and choose the checkerboard object you created above.
The Transparency slider can be adjusted between showing the camera image, 3D scene, or a mix of both. Show at least some of the raw camera image for this step.
Place the checkerboard in the view of the camera, then click in the 2D view to take a sample of the calibration board.
You will want to give the algorithm a variety of samples mostly around the edge of the image.
You will also want to get some samples of the calibration board at two different distances. One closer to the camera and one closer to where you will be capturing video.
Taking samples can be a bit of an art form.
You will want somewhere around 15 samples.
Once you are done click Add to Lens Distortion Calibration.
With an OptiTrack system you are looking for an RMS Reprojection Error of around 0.1 at the end. Slightly higher values can be acceptable as well, but will be less accurate.
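For reference, the RMS Reprojection Error is the root-mean-square of the per-sample pixel residuals between the detected and reprojected checkerboard corners; the residual values in this small Python illustration are made up:

import math

def rms(residuals_px):
    # Root-mean-square of per-sample reprojection residuals, in pixels.
    return math.sqrt(sum(r * r for r in residuals_px) / len(residuals_px))

print(f"RMS reprojection error: {rms([0.08, 0.12, 0.09, 0.11, 0.10]):.3f} px")  # 0.101 px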
The Nodal Offset tab will calculate the extrinsics, i.e. the position of the camera relative to the OptiTrack Rigid Body.
Select the Nodal Offset Checkerboard algorithm and your checkerboard from above.
Take samples similar to the Lens Distortion section.
You will want somewhere around 5 samples.
Click Apply to Camera Parent.
This will modify the position of the “Cine_Parent" actor created above.
Set the Transparency to 0.5.
This will allow you to see both the direct feed from the camera and the 3D overlay at the same time. As long as your calibration board is correctly set up in the 3D scene, then you can verify that the 3D object perfectly overlays on the 2D studio camera image.
In the World Outliner, click the Edit button on the nDisplay_InCameraVFX_Config object. This will load the controls for configuring nDisplay.
For larger setups, you will configure a display per section of the LED wall. For smaller setups, you can delete additional sections (VP_1, VP_2, and VP_3) accordingly from the 3D view and the Cluster pane.
For a single display:
Select VP_0 and in the Details pane set the Region > W and H properties to the resolution of your LED display.
Do the same for Node_0 (Master).
Select VP_0 and load the plane mesh we created to display the LED wall in Motive.
An example file for the plane mesh can be found in the Contents folder of the OptiTrack Live Link Plugin. This file defines the physical dimensions of the LED wall.
Select the "ICVFXCamera" actor, then choose your camera object under In-Camera VFX > Cine Camera Actor.
Compile and save this blueprint.
Click Export to save out the nDisplay configuration file. (This file is what you will be asked for in the future in an application called Switchboard, so save it somewhere easy to find.)
Go back to your main Unreal Engine window and click on the nDisplay object.
Click + Add Component and add a Live Link Controller.
Set the Subject Representation to the Rigid Body for your LED Wall in Motive and set the Component to Control to “SM_Screen_0”.
At this point your LED Wall should be tracked in the scene, but none of the rendering will look correct yet.
To validate that this was all setup correctly you can turn off Evaluate Live Link for your CineCamera and move it so that it is in front of the nDisplay LED Wall.
Make sure to re-enable Evaluate Live Link afterwards.
The next step would be to add whatever reference scene you want to use for your LED Wall Virtual Production shoot. For example, we just duplicated a few of the color calibrators (see image to the right) included with the sample project, so that we have some objects to visualize in the scene.
If you haven’t already, go to File > Save All at this point. Ideally, you should save frequently throughout the whole process to make sure you don’t lose your work.
Click the double arrows above the 3D Viewport >> and choose Switchboard > Launch Switchboard Listener. This launches an application that listens for a signal from Switchboard to start your experience.
Click the double arrows above the 3D Viewport >> and choose Launch Switchboard.
If this is your first time doing this, then there will be a small installer that runs in the command window.
A popup window will appear.
Click the Browse button next to the uProject option and navigate to your project file (.uproject).
Then click Ok and the Switchboard application will launch.
In Switchboard click Add Device, choose nDisplay, click Browse and choose the nDisplay configuration file (.ndisplay) that you created previously.
In Settings, verify that the correct project, directories and nDisplay are being referenced.
Click the power plug icon to Connect all devices.
Make sure to save and close your Unreal Engine project.
Click the up arrow button to Start All Connected Devices.
The image on the LED wall should look different when you point the camera at it, since it is calculating for the distortion and position of the lens. From the view of the camera it should almost look like you are looking through a window where the LED wall is located.
You might notice that the edge of the camera’s view is a hard edge. You can fix this and expand the field of view slightly to account for small amounts of lag by going back to your Unreal Engine project into the nDisplay object.
Select the "ICVFXCamera" object in the Components pane.
In the Details pane, set the Field of View Multiplier to a value of about 1.2 to account for any latency, then set the Soft Edge > Top and Bottom and Sides properties to around 0.25 to blur the edges.
From an outside perspective, the final product will look like a static image that updates based on where the camera is pointing. From the view of the cameras, it will essentially look like you are looking through a window to a different world.
In our example, we are just tracking a few simple objects. In real productions you’ll use high quality 3D assets and place objects in front of the LED wall that fit with the scene behind to create a more immersive experience, like seen in the image to the right. With large LED walls, the walls themselves provide the natural lighting needed to make the scene look realistic. With everything set up correctly, what you can do is only limited by your budget and imagination.
Below is a quick start guide for most Prime Color and Prime Color FS setups. These setup steps and settings optimize Prime Color camera systems and are strongly recommended for best performance. Please see our full pages for more in-depth information on each topic.
If you experience latency or camera drops, you may need to increase the specifications of certain components, especially if your setup includes a larger Prime Color camera count. Please reach out to our team if you are still experiencing these issues after applying the specifications and setup below.
Each Prime Color camera must be uplinked and powered through a standard PoE connection that can provide at least 15.4 watts to each port simultaneously.
Please note that if your aggregation switch is PoE, you can plug your Prime Color Cameras directly into the aggregation switch. PoE injectors are optional and will only be required if your aggregation switch is not PoE.
Prime Color cameras connect to the camera system just like other Prime series camera models. Simply plug the camera into a PoE switch that has enough available bandwidth and it will be powered and synchronized along with the other tracking cameras. When you have two color cameras, distribute them evenly across different PoE switches so that the data load is balanced.
For 1-2 Prime Color cameras, it is recommended to use a 1Gbps network switch with a 1Gbps uplink port and a 1Gbps NIC or higher. For 3+ Prime Color cameras, it is required to use network switches with a 10Gbps uplink port in conjunction with a designated 10Gbps NIC and its appropriate drivers.
When using multiple Prime Color cameras, we recommend connecting the color cameras directly into the 10-gigabit aggregation (uplink) switch, because this setup is best for preventing bandwidth bottlenecks. A PoE injector will be required if the uplink switch does not provide PoE. This allows the data to travel directly onto the uplink switch and to the host computer through the 10-gigabit network interface. It also separates the color cameras from the tracking cameras.
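As a rough sizing sketch in Python based on the guidelines above (15.4 W per PoE port from the power requirement, and the 1-2 vs. 3+ color camera rule for the uplink tier; actual budgets vary per switch model, so treat this as a starting estimate):

def size_color_camera_network(color_camera_count: int, watts_per_port: float = 15.4):
    # Minimum simultaneous PoE budget and the uplink/NIC tier suggested above.
    poe_budget_w = color_camera_count * watts_per_port
    uplink = "1Gbps uplink and NIC" if color_camera_count <= 2 else "10Gbps uplink and NIC"
    return poe_budget_w, uplink

budget, uplink = size_color_camera_network(4)
print(f"4 color cameras: at least {budget:.1f} W of PoE and a {uplink}")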
You'll want to remove as much bloatware from your PC as possible in order to optimize your system and ensure that minimal unnecessary background processes are running. Background processes can take up valuable CPU resources from Motive and cause frame drops while running your camera system.
There are many external resources on removing unused apps and halting unnecessary background processes, so this is not covered within the scope of this page.
As a general rule for all OptiTrack camera systems, you'll want to disable all Windows firewalls and either disable or remove any antivirus software. If firewalls or antivirus software are enabled, they can cause frame drops while running your camera system.
In order for Motive to run above other processes, you'll need to change the Priority of Motive.exe to High.
Right-click the Motive shortcut on your Desktop and open its Properties.
In the Target: text field, enter the path below. This will allow Motive to run at High priority, and the setting will persist across closing and reopening Motive.
C:\Windows\System32\cmd.exe /C start "" /high "C:\Program Files\OptiTrack\Motive\Motive.exe"
Please refrain from setting the priority to Realtime. If Realtime is selected, this can cause loss of input control (mouse, keyboard, etc.) since Windows can prioritize Motive above input processes.
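If you prefer to raise the priority of an already-running Motive instead of using the shortcut, a small Python sketch using the third-party psutil package (pip install psutil; assumed to be installed, Windows-only priority constant) can do the same thing:

import psutil

# Find every running Motive.exe and raise it to High (deliberately not Realtime).
for proc in psutil.process_iter(["name"]):
    if proc.info["name"] == "Motive.exe":
        proc.nice(psutil.HIGH_PRIORITY_CLASS)
        print(f"Set PID {proc.pid} to High priority")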
If you're running a system with a CPU with a lower core count, you may need to disable Motive from running on a couple of cores. This will help stabilize the overall system and free up some cores for other Windows required processes.
From the Task Manager, navigate to the Details tab and right click on Motive.exe
Select Set Affinity
From this window, uncheck the cores you wish to disallow Motive.exe to run on.
Click OK
Please note that you should only ever disable two cores or fewer to ensure Motive still runs smoothly.
We recommend that you start with only one core and work your way up to two if you're still experiencing frame drop issues with your camera system.
Windows IoT is a stripped-down version of the Windows OS. This can offer many benefits in terms of running a smooth system with very few of the 'extras' that come standard with more commercial versions of Windows. Windows IoT can further improve Prime Color camera system performance.
Your Network Interface Card has a few settings that you'll need to change in order to optimize your system and reduce issues when capturing Prime Color Camera video.
To navigate to the camera network's NIC:
Open Windows Settings
Select Ethernet from the navigation sidebar
Under Related settings select Change adapter options
From the Network Connections pop-up window, right click on your NIC and select Properties
Select the Configure... button and navigate to the Advanced tab
For the Speed and Duplex property, you'll want to change this to the highest throughput of your NIC. If you have a 10Gbps NIC, make sure that 10Gbps Full Duplex is selected. This property allows the NIC to operate at its full throughput. If this setting is not set to Full, Windows has a tendency to throttle the NIC, causing a 10Gbps NIC to send data at only 2Gbps.
Interrupt Moderation allows the NIC to moderate interrupts. When a significant amount of data is being uplinked to Motive, more interrupts occur, hindering system performance. You'll want to Disable this property.
After the above properties have been applied, the NIC will go through a reboot process. This process is automatic; however, it will make your camera network appear to be down for a few minutes. This is normal, and once the NIC has rebooted, it should begin to work as expected.
Although not recommended, you may use a laptop PC to run a Prime Color camera system. When using a laptop PC, you'll need to use an external network adapter. The above settings will typically not apply to these types of adapters, so no properties will need to be changed.
It is important to use a Thunderbolt port adapter with corresponding Thunderbolt ports on your laptop, as opposed to standard USB-C adapters/ports.
By default this value is set to 50; however, depending on the specifications of your particular system, this value may need to be lowered, or it can be raised so long as your system can handle the increased data output.
By default this value is set to the full resolution of 1920 x 1080. Typically you will not need to alter this setting.
This page provides instructions on how to set up and use the OptiTrack active marker solution.
Additional Note
This guide is for OptiTrack active components only. Third-party IR LEDs will not work with the instructions provided on this page.
This solution is supported for Ethernet camera systems (Slim 13E or Prime series cameras) only. USB camera systems are not supported.
Motive version 2.0 or above is required.
This guide covers active component firmware versions 1.0 and above; this includes all active components that were shipped after September 2017.
For active components that were shipped prior to September 2017, please see the page for more information about the firmware compatibility.
The OptiTrack Active Tracking solution allows synchronized tracking of active LED markers using an OptiTrack camera system. It consists of the Base Station and, per the user's choice, Active Tags that can be integrated into any object and/or the "Active Puck", which can act as its own single Rigid Body.
Connected to the camera system, the Base Station emits RF signals to the active markers, allowing precise synchronization between camera exposure and illumination of the LEDs. Each active marker is uniquely labeled in the Motive software, allowing more stable Rigid Body tracking, since active markers will never be mislabeled and unique marker placements are no longer required for distinguishing multiple Rigid Bodies.
Sends out radio frequency signals for synchronizing the active markers.
Powered by PoE, connected via Ethernet cable.
Must be connected to one of the switches in the camera network.
Connects to a USB power source and illuminates the active LEDs.
Receives RF signals from the Base Station and correspondingly synchronizes illumination of the connected active LED markers.
Emits 850 nm IR light.
4 active LEDs in each bundle, and up to two bundles can be connected to each Tag (8 active LEDs per Tag: 4 LEDs/set x 2 sets).
Size: 5 mm (T1 ¾) Plastic Package, half angle ±65°, typ. 12 mW/sr at 100mA
An Active Tag self-contained in a trackable object, providing 6 DoF information for any arbitrary object it's attached to. It carries a factory-installed Active Tag with 8 LEDs and a rechargeable battery with up to 10 hours of run time on a single charge.
Connects to one of the PoE switches within the camera network.
For best performance, place the base station near the center of your tracking space, with unobstructed lines of sight to the areas where your Active Tags will be located during use. Although the wireless signal is capable of traveling through many types of obstructions, there still exists the possibility of reduced range as a result of interference, particularly from metal and other dense materials.
Do not place external electromagnetic or radiofrequency devices near the Base Station.
When the Base Station is working properly, the LED closest to the antenna should blink green while Motive is running.
BaseStation LEDs
Note: Behavior of the LEDs on the Base Station is subject to change.
Communication Indicator LED: When the BaseStation is successfully sending out data and communicating with the active pucks, the LED closest to the antenna will blink green. If this LED lights red, it indicates that the BaseStation has failed to establish a connection with Motive.
Interference Indicator LED: The middle LED indicates whether there is other signal traffic on the respective radio channel and PAN ID that might interfere with the active components. This LED should stay dark in order for the active marker system to work properly. If it flashes red, consider switching both the channel and the PAN ID on all of the active components.
Power Indicator LED: The LED located at the corner, furthest from the antenna, indicates power for the BaseStation.
Connect two sets of active markers (4 LEDs in each set) into a Tag.
Connect the battery and/or a micro USB cable to power the Tag. The Tag takes 3.3V ~ 5.0V input from the micro USB cable. For powering through the battery, use only the batteries supplied by us. To recharge the battery, keep the battery connected to the Tag and then connect the micro USB cable.
To initialize the Tag, press the power switch once. Be careful not to hold the power switch down for more than a second, because that will start the device in firmware update (DFU) mode. If it initializes in DFU mode, which is indicated by two orange LEDs, just power off and restart the Tag. To power off the Tag, hold down the power switch until the status LEDs go dark.
Once powered, you should be able to see the illumination of IR LEDs from the 2D reference camera view.
Puck Setup
Press the power button for 1~2 seconds and release. The top-left LED will illuminate orange while the Puck initializes. Once it has initialized, the bottom LED will light up green if it has made a successful connection with the Base Station. Then the top-left LED will start blinking green, indicating that sync packets are being received.
Active Pattern Depth
Settings → Live Pipeline → Solver Tab with Default value = 12
This adjusts the complexity of the illumination patterns produced by active markers. In most applications, the default value gives quality tracking results. If a high number of Rigid Bodies are tracked simultaneously, this value can be increased to allow more combinations of the illumination patterns on each marker. If this value is set too low, duplicate active IDs can be produced; should that error appear, increase the value of this setting.
Minimum Active Count
Settings → Live Pipeline → Solver Tab with Default value = 3
Sets the number of rays required to establish the active ID for each on frame of an active marker cycle. If this value is increased and active markers become occluded, it may take longer for the active markers to be re-established in the Motive view. The majority of applications will not need to alter this setting.
Active Marker Color
Settings → Views → 3D Tab with Default color = blue
The color assigned to this setting will be used to indicate and distinguish active and passive markers seen in the viewer pane of Motive.
For tracking of the active LED markers, the following camera settings may need to be adjusted for best tracking results:
For tracking active markers, set the camera exposure a bit higher than when tracking passive markers. This allows the cameras to better detect the active markers. The optimal value will vary depending on the camera system setup, but in general you would want to set the camera exposure between 400 and 750 microseconds.
Rigid Body definitions that are created from actively labeled reconstructions will search for specific marker IDs along with the marker placements to track the Rigid Body. This is further explained in the following section.
Duplicate active frame IDs
For the active label to properly work, it is important that each marker has a unique active ID. When there are markers sharing the same ID, there may be problems when reconstructing those active markers. In this case, the following notification message will show up. If you see this notification, please contact support to change the active IDs on the active markers.
In recorded 3D data, the labels of unlabeled active markers will still indicate that they are active markers. As shown in the image below, an Active prefix will be assigned in addition to the active ID to indicate that it is an active marker. This applies only to individual active markers that are not auto-labeled; markers that are auto-labeled using a trackable model will be assigned their respective labels.
When a trackable asset (e.g. a Rigid Body) is defined using active markers, its active ID information gets stored in the asset along with the marker positions. When auto-labeling the markers in the space, the trackable asset will additionally search for reconstructions with matching active IDs, in addition to the marker arrangements, to auto-label a set of markers. This adds an additional guard to the auto-labeler and prevents mislabeling errors.
Rigid Body definitions created from actively labeled reconstructions will search for respective marker IDs in order to solve the Rigid Body. This gives a huge benefit because the active markers can be placed in perfectly symmetrical marker arrangements among multiple Rigid Bodies and not run into labeling swaps. With active markers, only the 3D reconstructions with active IDs stored under the corresponding Rigid Body definition will contribute to the solve.
If a Rigid Body was created from actively labeled reconstructions, the corresponding Active ID gets saved under the corresponding Rigid Body properties. In order for the Rigid Body to be tracked, the reconstructions with matching marker IDs in addition to matching marker placements must be tracked in the volume. If the active ID is set to 0, it means no particular marker ID is given to the Rigid Body definition and any reconstructions can contribute to the solve.
This page includes all of the Motive tutorial videos for visual learners.
Updated videos coming soon!
Changes video mode of the camera. For more information regarding camera video types, please see: .
When toggled on, this shows the camera's field of view. This is particularly useful when setting up a camera volume.
When toggled on, this setting shows the frame delivery info for all the cameras in the system, overlaid on the selected camera's view.
Solver Tab: Residual (mm)
Set this value smaller for precision volume tracking. Any offset above 2.00 mm will be considered inaccurate, and the corresponding 2D data will be excluded from contributing to the reconstruction.
Increasing the circularity value will filter out non-marker reflections. Furthermore, it prevents collecting data where the calculated centroid is no longer reliable.
First, go into the perspective view pane > select a marker, then go to the Camera Preview pane > Eye Button > Set Marker Centroids: True. Make sure the cameras are in the object mode, then zoom into the selected marker in the 2D view. The marker will have two crosshairs on it; one white and one yellow. The amount of offset between the crosshairs will give you an idea of how closely the calculated 2D centroid location (thicker white line) aligns with the reconstructed position (thinner yellow line). Switching between the grayscale mode and the object mode will make the errors more distinguishable. The below image is an example of a poor calibration. A good calibration should have the yellow and white lines closely aligning with each other.
Connected cameras will be listed under the Devices pane. This panel is where we can configure settings (FPS, exposure, LED, etc.) for each camera and decide whether to use selected cameras for 3D tracking or reference videos. Only the cameras that are set to tracking mode will contribute to reconstructing 3D coordinates. Cameras in reference mode capture grayscale images for reference purposes only. The Devices pane can be accessed under the View tab in Motive or by clicking its icon on the main toolbar.
When an object is selected in Motive, all of its related properties will be listed under the Properties pane. For example, when a Rigid Body is selected in the 3D viewport, its corresponding properties will be listed in this pane, and we can view the settings and configure them as needed.
Likewise, this pane is also used to view the properties of the cameras and any other connected devices that are listed in the Devices pane.
This pane will be used in almost all of the workflows. The Properties pane can be accessed under the View tab in Motive or by clicking its icon on the main toolbar.
The top viewport is where 3D data is shown in Motive. Here, you can view and analyze 3D data within a calibrated capture volume. This panel will be used during live capture and also during the playback of recorded data. In the perspective viewport, you can select any objects in the capture volume, use the context menu to perform actions, or use the Properties pane to view and modify the associated properties.
You can use the dropdown menu at the top-left corner to switch between different viewports, and you can also use the button at the top-right corner to split the viewport into multiple views. If desired, an additional View pane can be opened by opening up a Viewer pane under the View tab or by clicking the icons on the main toolbar.
The bottom viewport is the Cameras viewport. Here, you can monitor the view of each camera in the system and apply masks. This pane is also used to examine the markers, or IR lights, seen by the cameras in order to examine how the 2D data is processed and reconstructed into 3D coordinates.
The Calibration pane is used in the camera calibration process. In order to compute 3D coordinates from captured 2D images, the camera system needs to be calibrated first. All tools necessary for calibration are included within the Calibration pane, and it can be accessed under the View tab or by clicking its icon on the main toolbar.
Each capture recording will be saved in a Take (TAK) file, and related Take files can be organized in session folders. Start your capture by first creating a new session folder. Create a new folder in the desired directory of the host computer and load the folder into the Data pane either by clicking on the icon or by drag-and-dropping it onto the data management pane. If no session folder is loaded, all recordings will be saved into the default folder located in the user documents directory (Documents\OptiTrack\Default). All newly recorded Takes will be saved within the currently selected session folder, which will be marked with the symbol.
NIC drivers may need to be installed via disc or downloaded from the manufacturer's support website. If you're unsure of where to find these drivers or how to install them, please reach out to our team.
If you're still experiencing issues with dropped frames even after altering the settings above, please reach out to our team for more information regarding Windows IoT.
In most cases your switch settings will not need to be altered. However, if your switch has a built-in storm control feature, you'll want to disable it.
It is recommended to close the Cameras viewport during recording. This further stabilizes Motive, minimizing lag and reducing frame drops.
Active tracking is supported only with the Ethernet camera system (Prime series or Slim 13E cameras). For instructions on how to set up a camera system see: .
For more information, please read through the page.
When tracking only active markers, the cameras do not need to emit IR light. In this case, you can disable the IR settings in the camera properties.
With a BaseStation and active markers communicating on the same RF channel, active markers will be reconstructed and tracked in Motive automatically. From the unique illumination patterns, each active marker gets labeled individually, and a unique marker ID gets assigned to the corresponding reconstruction in Motive. These IDs can be monitored in the 3D viewport. To check the marker IDs of the respective reconstructions, enable the Marker Labels option under the visual aids, and the IDs of selected markers will be displayed. The marker IDs assigned to active marker reconstructions are unique, and they can be used to point to a specific marker among many reconstructions in the scene.
Windows 10 or 11 Professional (64 Bit)
Designated 1Gbps NIC w/drivers
CPU: Intel i9 or better 3.5GHz+
Network switch with 1Gbps uplink port
RAM: 16GB+ of memory
GPU: GTX 1050 or better, with the latest drivers and support for OpenGL 4.0 or higher
M.2 SSD
Windows 10 or 11 Professional (64 Bit), or Windows IoT (contact Support)
Designated 10Gbps+ NIC w/drivers
CPU: Intel i9 or better 3.5GHz+
Network switch with 10Gbps+ uplink port
RAM: 32GB+ of memory
GPU: RTX 2070 or better, with the latest drivers and support for OpenGL 4.0 or higher
M.2 SSD
USB camera models, including Flex series cameras and V120:Duo/Trio tracking bars, are currently not supported in Motive 3.0.x versions. For those systems, please refer to the .
Choosing an appropriate camera mounting solution is very important when setting up a capture volume. A stable setup not only prevents camera damage from unexpected collisions, but it also maintains calibration quality throughout capture. All OptiTrack cameras have ¼-20 UNC threaded holes – ¼ inch diameter, 20 threads/inch – which is the industry standard for mounting cameras. Before planning the mount structures, make sure that you have optimized your camera placement plans.
Due to thermal expansion issues when mounted to walls, we recommend using Trusses or Tripods as primary mounting structures.
Trusses will offer the most stability and are less prone to unwanted camera movement for more accurate tracking.
Tripods, alternatively, offer more mobility to change the capture volume.
Wall Mounts and Speed Rails offer the ability to maximize space, but are the most susceptible to vibration from HVAC systems, thermal expansion, earthquake resistant buildings, etc. This vibration can cause inaccurate calibration and tracking.
Camera clamps are used to fasten cameras onto stable mounting structures, such as a truss system, wall mounts, speed rails, or large tripods. There are some considerations when choosing a clamp for each camera. Most importantly, the clamps need to be able to bear the camera weight. Also, we recommend using clamps that offer adjustment of all 3 degrees of orientation: pitch, yaw, and roll. The stability of your mounting structure and the placement of each camera is very important for the quality of the mocap data, and as such we recommend using one of the mounting structures suggested in this page.
Manfrotto clamps come in three parts:
Manfrotto 035 Super Clamp
Manfrotto 056 3-Way, Pan-and-Tilt Head with 1/4"-20 Mount
Reversible Short Brass Stud
For proper assembly, please follow the steps below:
Place the brass stud into the 16mm hexagon socket in the Manfrotto Super Clamp.
Depress the spring-loaded button so the brass stud will lock into place.
Tighten the safety pin mechanism to secure the brass stud within the hexagon socket. Be sure that the 3/8″ screw (larger) end of the stud is facing out.
From here, attach the Super Clamp to the 3-Way, Pan-and-Tilt Head by screwing in the brass stud into the screw hole of the 3-Way, Pan-and-Tilt Head.
Be sure to tighten these two components fairly tight as you don't want them to swivel when installing cameras. It helps to first tighten the 360° swivel on the 3-Way, Pan-and-Tilt Head as this will ensure that any unwanted swivel will not occur when tightening the two components together.
Once these two components are attached, you should have a fully functioning clamp to attach your cameras to.
Large scale mounting structures, such as trusses and wall mounts, are the most stable and can be used to reliably cover larger volumes. Cameras are well-fixed and the need for recalibration is reduced. However, they are not easily portable and cannot be easily adjusted. On the other hand, smaller mounting structures, such as tripods and C-clamps, are more portable, simple to setup, and can be easily adjusted if needed. However, they are less stable and more vulnerable to external impacts, which can distort the camera position and the calibration. Choosing your mounting structure depends on the capture environment, the size of the volume, and the purpose of capture. You can use a combination of both methods as needed for unique applications.
A truss system provides a sturdy structure and a customizable layout that can cover diverse capture volume sizes, ranging from a small volume to a very large volume. Cameras are mounted on the truss beam using the camera clamps.
Follow the truss installation instruction and assemble the trusses on-site, and use the fastening pins to secure each truss segment.
Fasten the base truss to the ground.
Connect each of the segments and fix them by inserting a fastening pin.
Attach clamps to the cameras.
Mount the clamps to the truss beam.
Tripods are portable and simple to install, and they are not restricted by environmental constraints. There are various sizes and types of tripods for different applications. In order to ensure stability, each tripod needs to be installed on a hard surface (e.g. concrete). Usually, one camera is attached per tripod, but camera clamps can be used in combination to fasten multiple cameras along a leg, as long as the tripod is stable enough to bear the weight. Note that tripod setups are less stable and more vulnerable to physical impacts. Any camera movement after calibration will degrade the calibration quality, and the volume will need to be re-calibrated.
Wall mounts and speed rails are used with camera clamps to mount the cameras along the wall of the capture volume. This setup is very stable and has a low chance of being disturbed by physical contact. The capture volume size and layout will depend on the size of the room. However, note that the wall, or the building itself, may slightly shift due to the changing ambient temperature throughout the day. Therefore, you may need to routinely re-calibrate the volume if you are looking for precise measurements.
Below are recommended steps when installing speed rails onto different types of wall material. However, depending on your space, you may require alternative methods.
Although we have instructions below for installing speed rails, we highly recommend leaving the installation to qualified contractors.
General Tools
Cordless drill
Socket driver bits for drill
Various drill bits
Hex head Allen wrench set
Laser level
Speed Rail Parts
Pre-cut rails
Internal locking splice
5" offset wall mount bracket
End caps (should already be pre-installed onto pipes)
Elbow speed rail bracket (optional)
Tee speed rail bracket (optional)
Wood Stud Setup
Wood frame studs behind drywall require:
Pre-drilled holes.
2 1/2" long x 5/16" hex head wood lag screws.
Metal Stud Framing Setup
Metal stud framing behind drywall requires:
Undersized pre-drilled holes as a marker in the drywall.
2"long x 5/16" self tapping metal screws with hex head.
Metal studs can strip easily if the pre-drilled hole is too large.
Concrete Block/Wall Setup
Requires:
Pre-drilled holes.
Concrete anchors inserted into pre-drilled hole.
2 1/2" concrete lags.
Concrete anchors and lags must match for a proper fit.
It's easiest and safest to install with another person rather than alone, and a second person is especially necessary when rails have been pre-inserted into brackets prior to installing on a wall.
Pre-drill bracket locations.
If working in a smaller space, slip speed rails into brackets prior to installing.
Install all brackets by the top lag first.
Check to see if all are correctly spaced and level.
Install bottom lags.
Slip speed rails into brackets.
Set screw and internal locking splice of speed rail.
Attach clamps to the cameras.
Attach the clamps to the rail.
Helpful Tips/Additional Information
The 5" offset wall brackets should not exceed 4' between each bracket.
Speed rails are shipped no longer than 8'.
Using blue painter's tape is a simple way to mark placement without messing up paint.
Make sure to slide the end of the speed rail without the end cap in first. If installed with the end-cap end first it will "mushroom" the end and make it difficult to slip brackets onto the speed rail.
Check brackets for any burs/sharpness and gently sand off to avoid the bracket scratching the finish on the speed rail.
To further reduce the bracket scratching the finish on the speed rail, use a piece of paper inside the bracket prior to sliding the speed rail through.
USB cameras are currently not supported in 3.x versions of Motive. The USB camera pages on this wiki are for reference only at this time.
The OptiTrack Duo/Trio tracking bars are factory calibrated, and there is no need to calibrate the cameras to use the system. By default, the global origin is located at the center of the cameras, and the axes are oriented so that the Z-axis points forward, the Y-axis up, and the X-axis left.
If you wish to change the location and orientation of the global axes, you can use the ground plane tools from the Calibration pane and use a Rigid Body or a calibration square to set the global origin.
Adjusting the Coordinate System
[Motive] Open the Ground Planes page.
[Motive] Click the Set Ground Plane button, and the global origin will be adjusted.
Before setting up a motion capture system, choose a suitable setup area and prepare it in order to achieve the best tracking performance. This page highlights some of the considerations to make when preparing the setup area for general tracking applications. Note that this page provides just general recommendations and these could vary depending on the size of a system or purpose of the capture.
First of all, pick a place to set up the capture volume.
Setup Area Size
System setup area depends on the size of the mocap system and how the cameras are positioned. To get a general idea, check out the feature on our website.
Make sure there is plenty of room for setting up the cameras; it is usually beneficial to have extra space in case the system setup needs to be altered. Also pick an area with enough vertical space. Setting up the cameras at a high elevation is beneficial because it gives the cameras wider lines of sight, providing better coverage of the capture volume.
Minimal Foot Traffic
After camera system calibration, the system should remain unaltered in order to maintain the calibration quality. Physical contacts on cameras could change the setup, requiring it to be re-calibrated. To prevent such cases, pick a space where there is only minimal foot traffic.
Flooring
Avoid reflective flooring. The IR lights from the cameras could be reflected by it and interfere with tracking. If this is inevitable, consider covering the floor with surface mats to prevent the reflections.
Avoid flexible or deformable flooring; such flooring can negatively impact your system's calibration.
For the best tracking performance, minimize ambient light interference within the setup area. The motion capture cameras track the markers by detecting reflected infrared light and any extraneous IR lights that exist within the capture volume could interfere with the tracking.
Sunlight: Block any open windows that might let sunlight in. Sunlight contains wavelength within the IR spectrum and could interfere with the cameras.
IR Light sources: Remove any unnecessary lights in IR wavelength range from the capture volume. IR lights could be emitted from sources such as incandescent, halogen, and high-pressure sodium lights or any other IR based devices.
Dark-colored objects absorb most visible light; however, that does not mean they absorb IR light as well. Therefore, the color of a material is not a good way of determining whether an object will be visible within the IR spectrum. Some materials look dark to human eyes but appear bright white to the IR cameras. If these items are placed within the tracking volume, they can introduce extraneous reconstructions.
Since you already have the IR cameras in hand, use one of the cameras to check whether there are IR white materials within the volume. If there are, move them out of the volume or cover them up.
Remove any unnecessary obstacles out of the capture volume since they could block cameras' view and prevent them from tracking the markers. Leave only the items that are necessary for the capture.
Remove reflective objects nearby or within the setup area since IR illumination from the cameras could be reflected by them. You can also use non-reflective tapes to cover the reflective parts.
Prime 41 and Prime 17W cameras are equipped with powerful IR LED rings which enable tracking outdoors, even under the presence of some extraneous IR light. The strong illumination from the Prime 41 cameras allows a mocap system to better distinguish marker reflections from extraneous illumination. System settings and camera placements may need to be adjusted for outdoor tracking applications.
When enabled, the Broadcast Storm Control feature on the NETGEAR ProSafe GSM7228S may interfere with the synchronization mechanism used by OptiTrack Ethernet cameras. For proper system operation, the Storm Control features must be disabled for all of the ports used in this aggregation switch.
Step 1. Access the IPv4 settings on the network card that the camera network is connected to.
On Windows, open the Network and Sharing Center and access Change adapter settings.
Right-click on the adapter that the network switch is connected to and access its properties.
Among the list of items, select the Internet Protocol Version 4 (TCP/IPv4) and access its properties by clicking the Properties button.
Step 2. Make a note of the IP address settings for the network card connected to the switch.
Step 3. Change the IP address of the network card connected to the switch to 169.254.100.200. (A command-line alternative is sketched after Step 11.)
Step 4. Open a web browser and navigate to 169.254.100.100
Step 5. Log into the switch with the Username 'admin' and leave the Password blank
Step 6. Navigate to Security->Traffic Control->Storm Control->Storm Control Global Configuration
Step 7. Ensure that all storm control options are disabled
Step 8. Navigate to Maintenance->Save Config->Save Configuration
Step 9. Check the 'Save Configuration' check box
Step 10. Log out of the switch, or just close the browser window
Step 11. Restore the IP address settings noted in Step 2 for the network card connected to the switch
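If you prefer the command line, the address change in Step 3 and the restore in Step 11 can also be done with netsh from an elevated PowerShell prompt. This is a minimal sketch: the adapter name "Ethernet" is a placeholder for whatever your camera-network adapter is called (see Step 1), and the restore command assumes the adapter was originally on DHCP — otherwise re-apply the static values noted in Step 2.

```
# Step 3 equivalent: assign the static link-local address (adapter name is a placeholder)
netsh interface ipv4 set address name="Ethernet" static 169.254.100.200 255.255.0.0

# Step 11 equivalent: restore the original settings, e.g. return the adapter to DHCP
netsh interface ipv4 set address name="Ethernet" source=dhcp
```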
This page provides the general specifications for an OptiTrack camera setup. Please see the related wiki pages for more detailed instructions on how to set up your Ethernet camera system.
An Ethernet camera system networks via Ethernet cables. Ethernet-based camera models include PrimeX series (PrimeX 13, 13W, 22, 41), SlimX 13, and Prime Color models. Ethernet cables not only offer faster data transfer rates, but they also provide power over Ethernet to each camera while transferring the data to the host PC. This reduces the number of required cables and simplifies the overall setup. Furthermore, Ethernet cables have much longer length capability (up to 100m), allowing the systems to cover large volumes.
Host PC with an isolated network (PCI/e NIC)
Ethernet Cameras
Ethernet cables
Ethernet PoE/PoE+ Switches
Uplink switch (for large camera count setup)
The eSync (optional for synchronizations)
Cable Type
There are multiple categories of Ethernet cables, and each has different specifications for maximum data transmission rate and cable length. For an Ethernet-based system, Cat6 or above Gigabit Ethernet cables should be used. 10 Gigabit Ethernet cables — Cat6a or above — are recommended in conjunction with a 10 Gigabit uplink switch for the connection between the uplink switch and the host PC in order to accommodate the high data traffic.
Note
10Gb uplink switches, NICs, and cables are recommended for large camera counts or high-data cameras like the Prime Color cameras. Typically, 1Gb switches, NICs, and cables are enough to accommodate small and moderately sized systems. If you're unsure whether you need more than 1Gb, please contact one of our Sales Engineers for more information.
Electromagnetic Shielding
We recommend using only cables that have electromagnetic interference shielding. If unshielded cables are used, cables in close proximity to each other have the potential to create data transfer interference and cause cameras to stall in Motive.
Unshielded cables do not protect the cameras from Electrostatic Discharge (ESD), which can damage the camera. Do not use unshielded cables in environments where ESD exposure is a risk.
Our current general standards for network switches are:
PoE ports with at least 1 Gigabit of data transfer for each port.
Switches not purchased from OptiTrack are not supported by our Support team.
Here at OptiTrack, we recommend and provide Manfrotto clamps that have been tested and verified to ensure a solid hold on cameras and mounting structures. If you would like more information regarding Manfrotto clamps, please visit our website or reach out to our Support team.
Choosing an appropriate structure is critical in preparing the capture volume, and we recommend consulting our Sales Engineers when planning a layout for the camera mount setup.
Consult with the truss system provider or our Sales Engineers when setting up the truss system.
When using the Duo/Trio tracking bars, you can set the coordinate origin at the desired location and orientation using either a Rigid Body or a calibration square as a reference point. Using a calibration square will allow you to set the origin more accurately. You can also use a custom calibration square to set this.
First, place the calibration square at the desired origin. If you are using a Rigid Body, its position and orientation will be used as the reference.
[Motive] Open the Calibration pane.
[Motive] Select the type of calibration square that will be used as a reference to set the global origin. Set it to Auto if you are using a calibration square from us. If you are using a Rigid Body, select the Rigid Body option from the drop-down menu. If you are using a custom calibration square, you will also need to set the vertical offset.
[Motive] Select the calibration square markers or the Rigid Body markers from the 3D viewport.
All cameras are equipped with IR filters, so extraneous lights outside of the infrared spectrum (e.g. fluorescent lights) will not interfere with the cameras. IR lights that cannot be removed or blocked from the setup area can be masked in Motive using the masking tools during system calibration. However, this feature completely discards image data within the masked regions, and overusing it can negatively impact tracking. Thus, it is best to physically remove the object whenever possible.
Please read through the related page for more information.
A power budget that is able to support the desired number of cameras. If the desired number of cameras exceeds the power budget of a single switch, additional switches may be used. Please see the section below for more information.
For specific brands/models of switches, please contact our Support team.
For the most part, the switches provided by OptiTrack are ready to go without any additional settings or configuration. If you're having issues setting up switches provided by OptiTrack, please see the Cabling and Load Balancing section below or contact our Support team.
A: 2D frame drops are logged in the Status Log panel and can also be seen in the Devices pane, where a warning sign appears next to the corresponding camera. You may see a few frame drops when booting up the system or when switching between Live and Edit modes; however, this should occur only momentarily. If the system continues to drop 2D frames, there is a problem with receiving the camera data. In many cases, this occurs due to networking problems.
To narrow down the issue, disable the real-time reconstruction and check whether frames are still dropping. If the drops stop, the problem is associated with either software configuration or CPU processing. If frames continue to drop, the problem can be narrowed down to the network configuration, which may be resolved by doing the following:
This page provides instructions on how to configure the CameraNicFilter.xml file to whitelist or blacklist specific cameras from the connected camera network.
Starting with Motive 2.1, you can specify which cameras to utilize among the connected Ethernet cameras in a system. This can be done by setting up an XML file (CameraNicFilter.xml) and placing it in Motive's ProgramData directory: C:\ProgramData\OptiTrack\Motive\CameraNicFilter.xml. Once this is set, Motive will initialize only the specified cameras within the respective network interface. This allows users to distribute the cameras to specific network interfaces on a computer or on multiple computers.
Additional Note:
This filter works with Ethernet camera systems only. USB camera systems are not supported.
At the time of writing, the eSync is NOT supported: the eSync cannot be present in the system for the filter to work properly.
For common applications, there is usually no need to separate the cameras onto different network interfaces. However, there are a few situations where you may want to use this filter to segregate the cameras. Below are some sample applications of the filter:
Multiple Prime Color cameras
When there are multiple Prime Color cameras in a setup, you can configure the filter to spread out the data load. In other words, you can uplink color camera data through a separate network interface (NIC) and distribute the data traffic to prevent any bandwidth bottleneck. To accomplish this, multiple NICs must be present on the host computer, and you can distribute the data and uplink them onto different interfaces.
Active marker tracking on multiple capture volumes
For active marker tracking, this filter can be used to distribute the cameras to different host computers. By doing so, you can segregate the cameras into multiple capture volumes and have them share the same connected BaseStation. This could be beneficial for VR applications especially if you plan on having multiple volumes to calibrate because you can use the same active components between different volumes.
To separate the cameras, use a text editor to create an XML file named CameraNicFilter.xml. In this XML file, you will specify which cameras to whitelist or blacklist on each connected network interface. Please note that it is very important for the XML file to match the expected format; for this reason, we strongly recommend copying the template below and starting from there.
Attached below is a basic template of the CameraNicFilter.xml file. On each NIC element, you can specify each network interface using the IPAddress attribute, and then in its child elements, you can specifically set which cameras to whitelist or blacklist using their serial numbers.
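A minimal sketch of the file structure is shown below, assuming a root element named CameraNicFilter; the NIC, Whitelist, Blacklist, and Serial elements follow the descriptions in this section, while the IP address and serial number are placeholders to replace with your own values.

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Root element name is assumed; NIC/Whitelist/Blacklist/Serial follow the descriptions in this section. -->
<CameraNicFilter>
  <!-- One NIC element per network interface used to communicate with cameras. -->
  <NIC IPAddress="192.168.1.2">
    <Whitelist>
      <!-- Only cameras listed here will be initialized on this interface. -->
      <Serial>M18883</Serial>
    </Whitelist>
    <Blacklist>
      <!-- Cameras listed here will be ignored on this interface. -->
    </Blacklist>
  </NIC>
</CameraNicFilter>
```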
For each network interface that you will be using to communicate with the cameras, you will need to create a <NIC> element and assign a network IP address (IPv4) to its IPAddress attribute. Then, under each NIC element, you can specify which cameras to use or not to use.
Please make sure the correct IP addresses are assigned when configuring the NIC element. Run the ipconfig command in the Windows command prompt to list the assigned IP addresses of the available networks on the computer, and use the IPv4 address of the network that you wish to use. When necessary, you can also set a static IP address for the network interface and use a known address value for easier setup.
Under the NIC element, define two child elements: <Whitelist> and <Blacklist>. In each element, you will be specifying the cameras using their serial numbers. Within each network interface, only the cameras listed under the <Whitelist> element will be used and all of the cameras under <Blacklist> will be ignored.
As shown in the above template, you can specify which cameras to whitelist or blacklist using the corresponding camera serial numbers; for example, <Serial>M18883</Serial> specifies the camera with serial number M18883. You can also use a partial serial number as a wildcard to specify all cameras with matching serial numbers. For example, if you wish to blacklist all color cameras on a network (192.168.1.3), you can use C as the wildcard serial number, since the serial numbers of all color cameras start with C.
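For instance, a NIC entry along these lines (same assumed structure as the template above, placeholder IP address) would ignore every color camera on that interface:

```xml
<NIC IPAddress="192.168.1.3">
  <Whitelist>
    <!-- Whitelist the cameras you want this interface to use. -->
  </Whitelist>
  <Blacklist>
    <!-- A partial serial acts as a wildcard: all color camera serials start with C. -->
    <Serial>C</Serial>
  </Blacklist>
</NIC>
```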
Once the XML file is configured, save it in the ProgramData directory: C:\ProgramData\OptiTrack\Motive\CameraNicFilter.xml. If everything is set up properly, only the whitelisted cameras under each network interface will be initialized in Motive, and only the data from the specified cameras will be uplinked through the respective network interface.
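As a convenience, the file can be copied into place from PowerShell, assuming it was authored in the current folder:

```
# Copy the filter file into Motive's ProgramData directory (path given in this section).
Copy-Item .\CameraNicFilter.xml "C:\ProgramData\OptiTrack\Motive\CameraNicFilter.xml"
```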
In optical motion capture systems, proper camera placement is very important in order to efficiently utilize the captured images from each camera. Before setting up the cameras, it is a good idea to plan ahead and create a blueprint of the camera placement layout. This page highlights the key aspects of, and tips for, efficient camera placement.
A well-arranged camera placement can significantly improve the tracking quality. When tracking markers, 3D coordinates are reconstructed from the 2D views seen by each camera in the system. More specifically, correlated 2D marker positions are triangulated to compute the 3D position of each marker. Thus, having multiple distinct vantages on the target volume is beneficial because it allows wider angles for the triangulation algorithm, which in turn improves the tracking quality. Accordingly, an efficient camera arrangement should have cameras distributed appropriately around the capture volume. Doing so not only improves the tracking accuracy, but also prevents uncorrelated rays and marker occlusions. Depending on the type of tracking application, the capture volume environment, and the size of the mocap system, the proper camera placement layout may vary.
An ideal camera placement varies depending on the capture application. In order to figure out the best placements for a specific application, a clear understanding of the fundamentals of optical motion capture is necessary.
To calculate 3D marker locations, tracked markers must be simultaneously captured by at least two synchronized cameras in the system. When not enough cameras are capturing the 2D positions, the 3D marker will not be present in the captured data. As a result, the collected marker trajectory will have gaps, and the accuracy of the capture will be reduced. Furthermore, extra effort and time will be required for post-processing the data. Thus, marker visibility throughout the capture is very important for tracking quality, and cameras need to be capturing at diverse vantages so that marker occlusions are minimized.
Depending on the captured motion types and volume settings, the instructions for an ideal camera arrangement vary. For applications that require tracking markers at low heights, it is beneficial to place and aim some cameras at low elevations. For applications tracking markers placed strictly on the front of the subject, cameras at the rear won't see those markers and become unnecessary. For large volume setups, installing cameras around the perimeter of the volume at the highest elevation will maximize camera coverage and the capture volume size. For captures requiring extreme accuracy, it is better to place cameras close to the object, so that the cameras capture more pixels per marker and can more accurately track small changes in position.
Again, the optimal camera arrangement depends on the purpose and features of the capture application. Plan the camera placement for the specific capture application so that the capability of the system is fully utilized. Please contact us if you need help determining the optimal camera arrangement.
For common applications of tracking 3D position and orientation of Skeletons and Rigid Bodies, place the cameras on the periphery of the capture volume. This setup typically maximizes the camera overlap and minimizes wasted camera coverage. General tips include the following:
Mount cameras at the desired maximum height of the capture volume.
Distribute the cameras equidistantly around the setup area.
Adjust angles of cameras and aim them towards the target volume.
For cameras with rectangular FOVs, mount the cameras in landscape orientation. In very small setup areas, cameras can be aimed in portrait orientation to increase vertical coverage, but this typically reduces camera overlap, which can reduce marker continuity and data quality.
TIP: For capture setups involving large camera counts, it can be useful to separate the capture volume into two or more sections. This reduces the computational load on the software.
Around the volume
For common applications tracking a Skeleton or a Rigid Body to obtain the 6 Degrees of Freedom (x,y,z-position and orientation) data, it is beneficial to arrange the cameras around the periphery of the capture volume for tracking markers both in front and back of the subject.
Camera Elevations
For a typical motion capture setup, placing cameras at high elevations is recommended. Doing so maximizes the capture coverage in the volume and minimizes the chance of subjects bumping into the truss structure, which can degrade the calibration. Furthermore, when cameras are placed at low elevations and aimed across from one another, the synchronized IR illumination from each camera will be detected by the cameras opposite it and will need to be masked from the 2D view.
However, it can be beneficial to place cameras at varying elevations. Doing so will provide more diverse viewing angles from both high and low elevations and can significantly increase the coverage of the volume. The frequency of marker occlusions will be reduced, and the accuracy of detecting the marker elevations will be improved.
Camera to Camera Distance
Separating every camera by a consistent distance is recommended. When cameras are placed in close vicinity, they capture similar images of the tracked subject, and the redundant images contribute neither to preventing occlusions nor to the reconstruction calculations. This overlap detracts from the benefit of a higher camera count and increases the computational load for the calibration process. Moreover, it also increases the chance of marker occlusions, because markers will be blocked from multiple views simultaneously whenever obstacles are introduced.
Camera to Object Distance
An ideal distance between a camera and the captured subject also depends on the purpose of the capture. A long distance between the camera and the object gives more camera coverage for larger volume setups. On the other hand, capturing at a short distance gives less camera coverage, but the tracking measurements will be more accurate. The camera's lens focus ring may need to be adjusted for close-up tracking applications.
This page includes information on the status indicator lights on the OptiTrack Ethernet cameras.
The PrimeX Series cameras have a front mounted status ring light to indicate the state of the Motive software and firmware updates on the cameras. The following table lists the default ring light color associated with the state of Motive.
Status Ring Light Colors
| Ring Light | Status | Description | Customizable |
| --- | --- | --- | --- |
| Off | Powered & Awaiting Connection | When a camera is first plugged in, the LED ring light stays off until the camera receives commands from Motive and successfully authenticates via the security key. If the camera is receiving power but cannot connect to the network, the ring remains off with a small flashing white dot in the bottom left corner. | No |
| Slow Flashing Cyan (no IR) | Idle | Powered and connected to the network, but Motive is not running. Two dashes appear in the bottom left corner in lieu of the ID number. | No |
| Cyan | Live | Actively sending data and receiving commands when loaded into Motive. | Yes |
| White/Off | Masking | When a marker, or what a camera perceives as a marker, is visible to a camera while masking in the Calibration pane, the status light turns white. When masks are applied and no erroneous marker data is seen, the LEDs turn off and the volume is ready to wand. | No |
| Solid Green | Recording | Camera is sending data to be written to memory or disk. | Yes |
| Variable Green | Sampling During Calibration | The ring starts out dark, and green appears depending on where you have wanded relative to that camera. While the camera takes samples, a white light follows the wand movement around the LED ring. The ring fills in dark green, then light green, once enough samples are taken. | No |
| Flashing White | Calibration | During calibration, cameras that have collected sufficient data turn green. Once enough cameras have collected sufficient samples, the remaining cameras flash white to indicate that they still need more samples for a successful calibration. | No |
| None | Playback | Camera is operating, but Motive is in Edit mode. | Yes |
| Yellow | Selected | Camera is selected in Motive. | Yes |
| Red | Reference | Camera is in reference mode. Instead of capturing marker data, the camera records reference video (grayscale and MJPEG). | Yes |
| Cycle Red | Firmware Reset | On-board flash memory is being reset. | No |
| Cycle Cyan | Firmware Update (PrimeX) | Firmware is being written to flash. On completion, the color turns off and the camera reboots. | No |
| Cycle Yellow | Firmware Update (Prime) | Firmware is being written to flash. On completion, the color turns off and the camera reboots. | No |
On every PrimeX camera there is an additional display in the bottom left corner of the face of the camera.
Bottom Left Display Values
| Display | Description |
| --- | --- |
| Cycling Numbers | The camera is updating its firmware. The numbers start at 0 and increase to 100, indicating the percentage of the update completed. |
| Constant Number | The camera ID number assigned by Motive. Whenever Motive is closed and reopened, or a camera is removed from the system, the number updates accordingly. |
| 'E' | An 'E' error code on the display means the camera has lost connection to the network. To troubleshoot, start by unplugging the camera and plugging it back into the camera switch. Alternatively, you may restart the entire switch to reset the network. |
If for any reason you need to change a status ring light color, you can do so by going into Settings and, under General, clicking on the color box next to the status you would like to change. This will bring up a color picker window where you can choose a solid color, or choose multi-color to oscillate between colors. You can also save a color to your color library to apply it to other statuses.
To disable the aim assist button LED on the back of PrimeX cameras, simply toggle it off in the General settings, under Aim Assist > Aiming Button LED.
The PrimeX series cameras also have a status indicator on the back panel, which indicates the state of the camera only. When changing to a new version of Motive, the camera will need a firmware update in order to communicate with the new version. Firmware updates occur automatically when Motive starts. Likewise, if a camera's firmware has been updated for a newer version of Motive, running an older version of Motive will automatically revert the firmware to the older version.
Back Ring Light Colors
| Color | State | Description |
| --- | --- | --- |
| Green | Initialize Phase 1 | Camera is powered and the boot loader is running. Preparing to run main firmware. |
| Yellow | Initialize Phase 2 | Firmware is running and switch communication is in progress. |
| Blinking Green (Slow) | Initialize Phase 3 | Switch communication established; awaiting an IP address. |
| Cyan | Firmware Loading | Host has initiated the firmware upload process. |
| Blinking Yellow | Initialize Phase 4 | Camera has fully initialized and is synchronizing with the camera group or eSync. |
| Blinking Green (Fast) | Running | Camera is fully operational and synchronized to the camera group. Ready for data capture. |
| Blue | Hibernating | Camera is in a low-power state and not sending data. Occurs after closing Motive while leaving the cameras connected to the switch. |
| Alternating Red | Firmware Reset | On-board flash memory is being reset. |
| Alternating Yellow | Firmware Update | Firmware is being written to flash; the numeric display on the front shows progress. On completion, the light turns green and the camera reboots. |
When changing versions of Motive, a firmware update is needed. This process is automatic when opening the software, and the status ring light and back ring light show the state of the camera during this process, as described in the tables above. The camera should not be unplugged during a firmware reset or firmware update; give the camera time to finish this process before closing the software.
If a camera doesn't update its firmware with the rest of the cameras, it will not be loaded into Motive. This can be caused by a miscommunication with the switch when loading numerous cameras. Wait for all the updating cameras to finish, then restart Motive; the cameras that failed to update will then update.
During normal operation, the back panel light indicates the following states:

| Color | Description |
| --- | --- |
| Blue | Actively sending data and receiving commands when loaded into Motive. |
| Green | Camera is sending data to be written to memory or disk. |
| None | Camera is operating, but Motive is in Edit mode. |
| Yellow | Camera is selected in Motive. |
| Orange | Camera is in reference mode. Instead of capturing marker data, the camera records reference video (MJPEG). |
| Blinking red on start-up | A firmware update is in progress, which is normal; firmware is updated when a new version of Motive is installed on the computer. If the LED blinks red a few times about 15 seconds after camera start-up, the camera has failed to establish a connection with the PoE switch, and an error sign (E or E1) will be shown on the numeric display. |
| Yellow on start-up | The camera is attempting to establish a link with the PoE switch. |
Like PrimeX series cameras, SlimX 13 cameras have a status indicator on the back panel that indicates the state of the camera. The back panel states are identical to those in the Back Ring Light Colors table above, except that during Initialize Phase 4 the SlimX 13 synchronizes with the camera group or an eSync2.
Notes on USB camera models
USB camera models, including Flex series cameras and V120:Duo/Trio tracking bars, are not supported in Motive 3.x versions. For those systems, please refer to our old wiki site.
In order to ensure that every camera in a mocap system takes full advantage of its capability, the cameras need to be focused and aimed at the target tracking volume. This page includes detailed instructions on how to adjust the focus and aim of each camera for optimal motion capture. OptiTrack cameras are focused at infinity by default, which is generally sufficient for common tracking applications. However, we recommend always double-checking the camera view when first setting up the system to make sure the captured images are in focus. Obtaining the best quality image is very important, as the 3D data is derived from the captured images.
Make sure that the camera placement is appropriate for your application.
Pick a camera to adjust the aim and focus.
Set the camera to the raw grayscale video mode (in Motive) and increase the camera exposure to capture the brightest image (These steps are accomplished by the Aim Assist Button on featured cameras).
Place one or more reflective markers in the tracking volume.
Carefully adjust the camera angle while monitoring the Camera Preview so that the desired capture volume is included within the camera coverage.
Within the Camera Preview in Motive, zoom in on one of the markers so that it fills the frame.
Adjust the focus (detailed instruction given below) so that the captured image is resolved as clearly as possible.
Repeat the above steps for the other cameras in the system.
Adjusting aim with a single person can be difficult, because the user has to run back and forth between the camera and the host PC in order to adjust the camera angle and monitor the 2D view at the same time. OptiTrack cameras featuring the Aim Assist button (Prime series and Flex 13) make this aiming process easier. With just one button-click, the user can set the camera to grayscale mode with the exposure value at its optimal setting for adjusting both aim and focus. Fit the capture volume within the vertical and horizontal range shown by the virtual crosshairs that appear when Aim Assist mode is on. With this feature, a single user no longer needs to go back to the host PC to select cameras and change their settings. Settings for the Aim Assist button are available from the Application Settings pane.
After all the cameras are placed at the correct locations, they need to be properly aimed in order to fully utilize their capture coverage. In general, all cameras need to be aimed at the target capture volume where markers will be tracked. While the cameras are attached to the mounting structure, carefully adjust each camera clamp so that the camera's field of view (FOV) is directed at the capture region. Refer to the 2D camera views in the Camera Preview pane, and ensure that each camera view covers the desired capture region.
All OptiTrack cameras (except the V120:Duo/Trio tracking bars) can be re-focused to optimize image clarity at any distance within the tracking range. Change the camera to raw grayscale mode and adjust the camera settings — increase the exposure and LED settings — to capture the brightest image. Zoom in on one of the reflective markers in the capture volume and check the clarity of the image. Then, adjust the camera focus and find the point where the marker image is best resolved.
Auto-zoom using Aim Assist button
Double-click on the aim assist button to have the software automatically zoom into a single marker near the center of the camera view. This makes the focusing process easier to accomplish for a single person.
PrimeX 41 and PrimeX 22
For PrimeX 41 and 22 models, camera focus can be adjusted by rotating the focus ring on the lens body, which can be accessed at the center of the camera. The front ring on the lens changes the focus of the camera, and the rear ring adjusts the f-stop of the lens. In most cases, it is beneficial to set the f-stop low so the aperture is at its maximum size for capturing the brightest image. Carefully rotate the focus ring while monitoring the 2D grayscale camera view for image clarity. Once the focus and f-stop have been optimized on the lens, lock it down by tightening the set screw. In the default configuration, PrimeX 41 cameras are equipped with a 12mm F#1.8 lens, and PrimeX 22 cameras are equipped with a 6.8mm F#1.6 lens.
Prime 17W and 41*
For Prime 17W and 41 models, camera focus can be adjusted by rotating the focus ring on the lens body, which can be accessed at the center of the camera. The front ring on the Prime 41 lens changes the focus, while on the Prime 17W the rear ring adjusts the focus. Set the aperture at its maximum size in order to capture the brightest image; for the Prime 41, the aperture ring is located at the rear of the lens body, whereas the Prime 17W aperture ring is located at the front. Carefully rotate the focus ring while monitoring the 2D grayscale camera view for image clarity. Align the mark with the infinity symbol when setting the focus back to infinity. Once the focus has been optimized, lock the lens down by tightening the set screw.
*Legacy camera models
PrimeX 13 and 13W, and Prime 13* and 13W*
PrimeX 13 and PrimeX 13W cameras use M12 lenses and can be focused using custom focus tools that rotate the lens body. Focusing tools can be purchased on OptiTrack's Lens Accessories page; they clip onto the camera lens and rotate it without opening the camera housing. It can be beneficial to lower the LED illumination to minimize reflections from your hand while adjusting.
*Legacy camera models
Slim Series
SlimX 13 cameras also feature M12 lenses. The camera focus can be easily adjusted by rotating the lens, without the need to remove the housing. Slim cameras support multiple lens types, including third-party lenses, so focusing techniques will vary; refer to the lens type to determine how to proceed. (In general, M12 lenses are focused by rotating the lens body, while C and CS mount lenses are focused by rotating the focus ring.)
Below are a couple of diagrams showing how to properly set up your network. These setups are strongly advised and have been tested for optimal use and safety.
Ethernet Camera Models: PrimeX series and SlimX 13 cameras. Follow the wiring diagram below and connect each of the required system components.
Connect PoE Switch(es) to the Host PC: Start by connecting a PoE switch to the host PC via an Ethernet cable. Since the camera system takes up a large amount of data bandwidth, the Ethernet camera network traffic must be separated from the office/local area network. If the computer used for capture is connected to an existing network, you will need a second Ethernet port or an add-on network card to connect the computer to the camera network. When you do, make sure to turn off your computer's firewall for that particular network under the Windows Firewall settings.
Connect the Ethernet Cameras to the PoE Switch(es): Ethernet cameras connect to the host PC via PoE/PoE+ switches using Cat6 or above Ethernet cables.
Power the Switches: The switch must be powered in order to power the cameras. To completely shut down the camera system, power off the network switch.
Ethernet Cables: Ethernet cable connections are subject to the limitations of the PoE (Power over Ethernet) and Ethernet communications standards, meaning the distance between a camera and a switch can be up to about 100 meters when using Cat6 cables (Cat5e or below is not supported). For best performance, do not connect devices other than the computer to the camera network. Install add-on network cards if additional Ethernet ports are required.
On smaller systems, you may not need to use the SFP ports to uplink your data. The SFP port on the switch and the SFP module provided by OptiTrack are intended for heavily loaded systems (i.e. larger camera counts or Prime Color camera systems).
In the event that the SFP ports are NOT used, use one of the standard Ethernet ports on your switch to uplink data to Motive. If you're unsure whether you'll need the SFP port and SFP module, please reach out to either our Sales or Support teams.
Ethernet Cable Requirements
Cable Type
There are multiple categories of Ethernet cables, and each has different specifications for maximum data transmission rate and cable length. For an Ethernet-based system, category 6 or above Gigabit Ethernet cables should be used. 10 Gigabit Ethernet cables — Cat6a or above — are recommended in conjunction with a 10 Gigabit uplink switch for the connection between the uplink switch and the host PC in order to accommodate the high data traffic. A 10 Gigabit uplink switch and NIC are recommended for multi-switch setups or when using Prime Color cameras.
Electromagnetic Shielding
Please use cables that have electromagnetic interference shielding. If unshielded cables are used, cables in close proximity to each other can interfere and cause the cameras to drop frames in Motive.
External Sync: If you wish to connect external devices, use the eSync synchronization hub. Connect the eSync to one of the PoE switches using an Ethernet cable, or, if you have a multi-switch setup, plug the eSync into the aggregation switch.
Uplink Switch: For systems with higher camera counts that use multiple PoE switches, use an uplink Ethernet switch to link all of the switches and connect them to the host PC. The switches must be connected in a star topology, with the uplink switch as the central node connecting to the host PC. NEVER daisy-chain multiple PoE switches in series, because doing so can introduce latency to the system.
High Camera Counts: For setting up more than 24 Prime series cameras, we recommend using a 10 Gigabit uplink switch and connecting it to the host PC via an Ethernet cable that supports 10 Gigabit transfer rate — Cat6a or above. This will provide larger data bandwidth and reduce the data transfer latency.
PoE switch requirement: The PoE switches must be able to provide 15.4W power to every port simultaneously. PrimeX 41, PrimeX 22, and Prime Color camera models run on a high power mode to achieve longer tracking ranges, and they require 30W of power from each port. If you wish to operate these cameras at standard PoE mode, set the LLDP (PoE+) Detection setting to false under the application settings. For network switches provided by OptiTrack, refer to the label for the number of cameras supported for each switch.
OptiTrack’s Ethernet cameras require PoE or PoE+ Gigabit Ethernet switches, depending on the camera's power requirement. The switch serves two functions: transferring camera data to the host PC, and supplying power to each camera over the Ethernet cable (PoE).
The switch must provide consistent power to every port simultaneously in order to power each camera. Standard PoE switches must provide a full 15.4 watts to every port simultaneously. PrimeX 41, PrimeX 22, and Prime Color cameras have stronger IR strobes, which require higher power for maximum performance.
These cameras need to be routed through PoE+ switches that provide a full 30 watts of power to each port simultaneously. Note that PoE midspan devices and power injectors are not suitable for Ethernet camera systems.
The following is generally used for large PoE+ camera setups with multiple camera switches. Please refer to the Switch Power Budget and Camera Power Requirements tab above for more information.
Some switches have a power budget smaller than what is needed, depending on which OptiTrack cameras are being used. For larger camera setups, this can result in multiple switches that can each use only a portion of their available ports. In this case, we recommend a Redundant Power System (RPS) to extend the power budget of your switch. For example, a 24-port switch may have a 370W power budget, which supports only 12 PoE+ cameras that require 30W each. With an RPS, however, the same 24-port switch can power 24 PoE+ cameras with a 30W power requirement, utilizing all 24 of the PoE ports on the switch.
The eSync is used to enable synchronization and timecode in Ethernet-based mocap systems. Only one device is needed per system, and it enables you to link the system to almost any signal source. It has multiple synchronization ports that allow integrating external signals from other devices. When an eSync is used, it acts as the master in the synchronization chain.
With large camera system setups, you should connect the eSync onto the aggregator switch via a standard Ethernet port for more stable camera synchronization. If PoE is not supported on the aggregator switch, the sync hub will need to be powered separately from a power outlet.
At this point, all of the connected cameras will be listed on the Devices pane and the 3D viewport when you start up Motive. Check to make sure all of the connected cameras are properly listed in Motive.
Then, open the Status Log panel and check that there are no 2D frame drops. You may see a few frame drops when booting up the system or when switching between Live and Edit modes; however, this should occur only momentarily. If the system continues to drop 2D frames, there is a problem with how the system is delivering the camera data. Please refer to the troubleshooting section for more details.
During the calibration process, a calibration square is used to define the global coordinate axes as well as the ground plane for the capture volume. Each calibration square has a different vertical offset value. When defining the ground plane, Motive will recognize the square and ask the user whether to change the value to the matching offset.
CS-200:
Long arm: Positive z
Short arm: Positive x
Vertical offset: 19 mm
Marker size: 14 mm (diameter)
CS-400: Used for common mocap applications. Contains knobs for adjusting the balance as well as slots for aligning with a force plate.
Long arm: Positive z
Short arm: Positive x
Vertical offset: 45 mm
Marker size: 19 mm (diameter)
Legacy L-frame square: Legacy calibration square designed before changing to the Right-hand coordinate system.
Long arm: Positive z
Short arm: Negative x
Custom Calibration square: Position three markers in your volume in the shape of a typical calibration square (creating a ~90 degree angle with one arm longer than the other). Then select the markers to set the ground plane.
Long arm: Positive z
Short arm: Negative x
When creating a custom ground plane, you can use Motive to help you position the markers so that they form an approximately 90-degree angle. This is contingent on how good your calibration is; however, it will still give you a fairly accurate starting point when setting your ground plane.
For Motive 1.7 or higher, the right-handed coordinate system is used as the standard across internal and exported formats and data streams. As a result, Motive 1.7 and later interpret the L-frame differently than previous releases.
This page covers the basic types of trackable assets in Motive. Assets in Motive are used both for tracking objects and for labeling 3D markers, and they are managed under the Assets pane. Each type of asset is further explained in the related pages.
Once Motive is prepared, the next step is to place markers on the subject and create corresponding assets. There are three different types of assets in Motive:
Marker Set
Rigid Body
Skeleton
For each Take, involved assets are displayed in the Assets pane, and the related properties show up at the Properties pane when an asset is selected within Motive.
The Marker Set is a list of marker labels that are used to annotate reconstructed markers. Marker Sets should only be used in situations where it is not possible to define a Rigid Body or Skeleton. In this case, the user will manually label markers in post-processing. When doing so, having a defined set of labels (Marker Set) makes this process much easier. Marker Sets within a Take will be listed in the Labels pane, and each label can be assigned through the Labeling process.
Rigid body and Skeleton assets are the Tracking Models. Rigid bodies are created for tracking rigid objects, and Skeleton assets are created for tracking human motions. These assets automatically apply a set of predefined labels to reconstructed trajectories using Motive's tracking and labeling algorithms, and Motive uses the labeled markers to calculate the position and orientation of the Rigid Body or Skeleton Segment. Both Rigid Body and Skeleton tracking data can be sent to other pipelines (e.g. animations and biomechanics) for extended applications. If new Skeletons or Rigid Bodies are created during post-processing, the take will need to be reconstructed and auto-labeled in order to apply the changes to the 3D data.
Assets may be created either in Live mode (before capture) or in post-processing (after capture, from a loaded Take).
The Assets pane lists out all assets that are available in the current capture. You can easily copy these assets onto other recorded Take(s) or to the live capture by doing the following:
Copying Assets to a Recorded Take
In order to copy and paste assets onto another Take, right-click on the desired Take to bring up the context menu and choose Copy Assets to Takes. This will bring up a dialog window for selecting which assets to move.
Copying Assets to Multiple Recorded Takes
If you wish to copy assets to multiple Takes, select multiple takes from the Data pane until the desired takes are all highlighted. Repeat the steps you took above for copying a single Take by right-clicking on any of the selected Takes. This should copy the assets you selected to all the selected Takes in the Data pane.
Copying Assets from a Recorded Take to the Live Capture
If you have a list of assets in a Take that you wish to import into the live capture, you can simply do this by right-clicking on the desired assets on the Assets pane, and selecting Copy Assets to Live.
For selecting multiple items, use Shift-click or Ctrl-click.
Assets can be exported to a Motive user profile (.MOTIVE) file so they can be re-imported later. The user profile is a text-readable file that can contain various configuration settings in Motive, including asset definitions.
When asset definitions are exported to a MOTIVE user profile, the profile stores the calibrated marker arrangement of each asset, and the assets can be imported into different takes without creating new ones in Motive. Note that these files specifically store the spatial relationship of each marker; therefore, only identical marker arrangements will be recognized and defined with the imported asset.
To export assets, go to the Files tab → Export Assets to export all of the assets in Live mode or in the current TAK file. You can also use the Files tab → Export Profile to export other software settings along with the assets.
You'll want to remove as much bloatware from your PC as possible in order to optimize your system and ensure that minimal unnecessary background processes are running. Background processes can take up valuable CPU resources from Motive and cause frame drops while running your camera system.
There are many external resources on removing unused apps and halting unnecessary background processes, so those steps are not covered within the scope of this page.
As a general rule for all OptiTrack camera systems, you'll want to disable all Windows firewalls and either disable or remove any antivirus software. If firewalls or antivirus software are enabled, they can cause frame drops while running your camera system.
In order for Motive to run above other processes, you'll need to change the priority of Motive.exe to High.
Right-click on the Motive shortcut on your Desktop and select Properties.
In the Target: text field, enter the path below. This will launch Motive at High priority, and the setting will persist across closing and reopening Motive.
C:\Windows\System32\cmd.exe /C start "" /high "C:\Program Files\OptiTrack\Motive\Motive.exe"
Please refrain from setting the priority to Realtime. If Realtime is selected, this can cause loss of input control (mouse, keyboard, etc.) since Windows can prioritize Motive above input processes.
If your system's CPU has a lower core count, you may need to prevent Motive from running on a couple of cores. This will help stabilize the overall system and free up some cores for other required Windows processes.
From the Task Manager, navigate to the Details tab and right click on Motive.exe
Select Set Affinity
From this window, uncheck the cores you wish to disallow Motive.exe to run on.
Click OK
Please note that you should only ever disable 2 cores or fewer to ensure Motive still runs smoothly.
We recommend starting with only one core and working your way up to two if you're still experiencing frame drop issues with your camera system. A one-off alternative to the Task Manager steps above is sketched below.
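As a hedged sketch, the same affinity change can be applied from PowerShell while Motive is running. The ProcessorAffinity value is a bitmask of logical cores; the 0x3F mask below assumes an 8-logical-core machine where the top two cores (6 and 7) are excluded, and the setting resets the next time Motive is launched.

```
# Restrict Motive.exe to logical cores 0-5 (bitmask 0b00111111 = 0x3F).
# Assumes an 8-logical-core CPU and a single running Motive instance; adjust the mask for your core count.
(Get-Process -Name Motive).ProcessorAffinity = 0x3F
```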
The settings below are generally for larger camera setups and Prime Color camera setups. Typically, smaller systems will not need to use the settings below. When in doubt, please reach out to our Support team.
In most cases, your switch settings will not need to be altered. However, if your switch has built-in Storm Control, you'll want to disable this feature.
Your Network Interface Card (NIC) has a few settings that you can change in order to optimize your system.
To navigate to the camera network's NIC:
Open Windows Settings
Select Ethernet from the navigation sidebar
Under Related settings select Change adapter options
From the Network Connections pop-up window, right-click on your NIC and select Properties
Select the Configure... button and navigate to the Advanced tab
For the Speed and Duplex property, you'll want to select the highest throughput of your NIC. If you have a 10Gbps NIC, make sure that 10Gbps Full Duplex is selected. This property allows the NIC to operate at its full capacity; if it is not set to Full Duplex, Windows has a tendency to throttle the NIC throughput, causing a 10Gbps NIC to send data at only 2Gbps.
Interrupt Moderation allows the NIC to moderate interrupts. When a significant amount of data is being uplinked to Motive, this can cause more interrupts to occur, hindering system performance. You'll want to disable this property. Both properties can also be set from PowerShell, as sketched below.
After the above properties have been applied, the NIC will go through a reboot process. This process is automatic; however, it will make your camera network appear to be down for a few minutes. This is normal, and once the NIC has rebooted, it should begin to work as expected.
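For reference, a minimal PowerShell sketch of the same two changes is below. The adapter name "Ethernet" is a placeholder, and the exact DisplayName/DisplayValue strings vary by NIC driver — list them first with Get-NetAdapterAdvancedProperty and adjust accordingly.

```
# List the advanced properties your driver actually exposes.
Get-NetAdapterAdvancedProperty -Name "Ethernet"

# Force full-rate operation (value string depends on the driver).
Set-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "Speed & Duplex" -DisplayValue "10 Gbps Full Duplex"

# Turn off interrupt moderation.
Set-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "Interrupt Moderation" -DisplayValue "Disabled"
```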
Although not recommended, you may use a laptop PC to run a larger or Prime Color camera system. When using a laptop PC, you'll need to use an external network adapter. The above settings will typically not apply to these types of adapters, so no properties will need to be changed.
It is important to use a Thunderbolt port adapter with corresponding Thunderbolt ports on your laptop, as opposed to standard USB-C adapters/ports.
OptiTrack motion capture systems can use both passive and active markers as indicators of 3D position and orientation. An appropriate marker setup is essential to both the tracking quality and the reliability of captured data. All markers must be properly placed and must remain securely attached to surfaces throughout the capture. If any markers are taken off or moved, they will become unlabeled from the Marker Set and will stop contributing to the tracking of the attached object. In addition to marker placement, marker counts and specifications (size, circularity, and reflectivity) also influence the tracking quality. Passive (retroreflective) markers need well-maintained retroreflective surfaces in order to fully reflect the IR light back to the camera. Active (LED) markers must be properly configured and synchronized with the system.
OptiTrack cameras track any surfaces covered with retroreflective material, which is designed to reflect incoming light back to its source. IR light emitted from the camera is reflected by passive markers and detected by the camera’s sensor. Then, the captured reflections are used to calculate the 2D marker position, which is used by Motive to compute 3D position through reconstruction. Depending on which markers are used (size, shape, etc.) you may want to adjust the camera filter parameters from the Live Pipeline settings in Application Settings.
The size of markers affects visibility. Larger markers stand out in the camera view and can be tracked at longer distances, but they are less suitable for tracking fine movements or small objects. In contrast, smaller markers are beneficial for precise tracking (e.g. facial tracking and microvolume tracking), but have difficulty being tracked at long distances or in restricted settings and are more likely to be occluded during capture. Choose appropriate marker sizes to optimize the tracking for different applications.
If you wish to track non-spherical retroreflective surfaces, lower the Circularity value in the 2D object filter in the application settings. This adjusts the circle filter threshold so that non-circular reflections can also be considered markers. However, keep in mind that this also lowers the filtering threshold for extraneous reflections.
All markers need to have a well-maintained retroreflective surface. Every marker must satisfy the brightness Threshold defined in the camera properties to be recognized in Motive. Worn markers with damaged retroreflective surfaces will appear dimmer in the camera view, and their tracking may be limited.
Pixel Inspector: You can analyze the brightness of pixels in each camera view by using the pixel inspector, which can be enabled from the Application Settings.
Please contact our Sales team to decide which markers will suit your needs.
OptiTrack cameras can track any surface covered with retro-reflective material. For best results, markers should be completely spherical with a smooth and clean surface. Hemispherical or flat markers (e.g. retro-reflective tape on a flat surface) can be tracked effectively from straight on, but when viewed from an angle, they will produce a less accurate centroid calculation. Hence, non-spherical markers will have a less trackable range of motion when compared to tracking fully spherical markers.
OptiTrack's active solution provides advanced tracking of IR LED markers to accomplish the best tracking results. This allows each marker to be labeled individually. Please refer to the Active Marker Tracking page for more information.
Active (LED) markers can also be tracked with OptiTrack cameras when properly configured. We recommend using OptiTrack’s Ultra Wide Angle 850nm LEDs for active LED tracking applications. If third-party LEDs are used, their illumination wavelength should be at 850nm for best results. Otherwise, light from the LED will be filtered by the band-pass filter.
If your application requires tracking LEDs outside of the 850nm wavelength, the OptiTrack camera should not be equipped with the 850nm band-pass filter, as it will cut off any illumination above or below the 850nm wavelength. An alternative solution is to use the 700nm short-pass filter (for passing illumination in the visible spectrum) and the 800nm long-pass filter (for passing illumination in the IR spectrum). If the camera is not equipped with the filter, the Filter Switcher add-on is available for purchase at our webstore. There are also other important considerations when incorporating active markers in Motive:
Place a spherical diffuser around each LED marker to increase the illumination angle. This will improve the tracking since bare LED bulbs have limited illumination angles due to their narrow beamwidth. Even with wide-angle LEDs, the lighting coverage of bare LED bulbs will be insufficient for the cameras to track the markers at an angle.
If an LED-based marker system will be strobed (to increase range, offset groups of LEDs, etc.), it is important to synchronize their strobes with the camera system. If you require a LED synchronization solution, please contact one of our Sales Engineers to learn more about OptiTrack’s RF-based LED synchronizer.
Many applications that require active LEDs for tracking (e.g. very large setups with long distances from a camera to a marker) will also require active LEDs during calibration to ensure sufficient overlap in-camera samples during the wanding process. We recommend using OptiTrack’s Wireless Active LED Calibration Wand for best results in these types of applications. Please contact one of our Sales Engineers to order this calibration accessory.
Proper marker placement is vital to the quality of motion capture data, because the markers on a tracked subject are used as indicators of both position and orientation. When an asset (a Rigid Body or Skeleton) is created in Motive, the unique spatial relationship of its markers is calibrated and recorded. The recorded information is then used to recognize the markers of the corresponding asset during the auto-labeling process. For best tracking results, when multiple subjects with a similar shape are involved in the capture, it is necessary to offset their marker placements to introduce asymmetry and avoid congruency.
Read more about marker placements from the Rigid Body Tracking page and the Skeleton Tracking page.
Asymmetry
Asymmetry is the key to avoiding congruency when tracking multiple Marker Sets. When there is more than one similar marker arrangement in the volume, marker labels may be confused. Thus, it is beneficial to place segment markers — joint markers must always be placed on anatomical landmarks — in asymmetrical positions on similar Rigid Bodies and skeletal segments. This provides a clear distinction between two similar arrangements. Furthermore, avoid placing markers in a symmetrical shape within a segment as well. For example, a perfectly square marker arrangement has ambiguous orientation, and frequent mislabels may occur throughout the capture. Instead, follow the rule of thumb of placing the less critical markers in asymmetrical arrangements.
Prepare the markers and attach them to the subject, whether a Rigid Body or a person. Minimize extraneous reflections by covering shiny surfaces with non-reflective tape. Then, securely attach the markers to the subject using an adhesive suitable for the surface. Various types of adhesives are available on our webstore for attaching the markers: acrylic, rubber, skin adhesive, and Velcro. Multiple types of marker bases are also available: carbon-fiber-filled bases, Velcro bases, and snap-on plastic bases.
This page provides instructions on how to utilize the Gizmo tool for modifying asset definitions (Rigid Bodies and Skeletons) in Motive.
Edit Mode: As of Motive 3.0, asset editing can only be performed in Edit mode.
Solved Data: In order to edit asset definitions from a recorded Take, the corresponding solved data must be removed before making the edit, and then recalculated.
The gizmo tools allow users to make modifications on reconstructed 3D markers, Rigid Bodies, or Skeletons for both real-time and post-processing of tracking data. This page provides instructions on how to utilize the gizmo tools.
Use the gizmo tools from the perspective view options to easily modify the position and orientation of Rigid Body pivot points. You can translate and rotate the Rigid Body pivot, assign the pivot to a specific marker, and/or assign the pivot to a mid-point among selected markers.
Select Tool (Hotkey: Q): Select tool for normal operations.
Translate Tool (Hotkey: W): Translate tool for moving the Rigid Body pivot point.
Rotate Tool (Hotkey: E): Rotate tool for reorienting the Rigid Body coordinate axis.
Scale Tool (Hotkey: R): Scale tool for resizing the Rigid Body pivot point.
Precise Position/Orientation: When translating or rotating the Rigid Body, you can CTRL + select a 3D reconstruction from the scene to precisely position the pivot point, or align a coordinate axis, directly on, or towards, the selected marker. Multiple reconstructions can also be selected, and their geometrical center (midpoint) will be used as the target reference.
You can utilize the gizmo tools to modify skeleton bone lengths, joint orientations, or the scale of the marker spacing. Translating and rotating skeleton assets changes how a skeleton bone is positioned and oriented with respect to the tracked markers; thus, any change to the skeleton definition will affect how realistically the human movement is represented.
The scale tool modifies the size of selected skeleton segments.
The gizmo tools can also be used to edit the positions of reconstructed markers. In order to do this, you must be working with reconstructed 3D data in post-processing. In live-tracking, or in 2D mode performing live-reconstruction, marker positions are reconstructed frame-by-frame and cannot be modified. The Edit Assets mode must be disabled to do this (Hotkey: T).
Translate
Using the translate tool, 3D positions of reconstructed markers can be modified. Simply click on the markers, turn on the translate tool (Hotkey: W), and move the markers.
Rotate
Using the rotate tool, the 3D positions of a group of markers can be rotated about the group's center. Simply select a group of markers, turn on the rotate tool (Hotkey: E), and rotate them.
Scale
Using the scale tool, the 3D spacing of a group of markers can be scaled. Simply select a group of markers, turn on the scale tool (Hotkey: R), and scale their spacing.
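As a rough illustration of what the three tools do to a selected group of markers, the following Python sketch (illustrative only, not Motive code) applies a translation, a rotation about the group's geometric center, and a scaling of the marker spacing about that same center.

```python
# Illustrative sketch, not Motive code: the three gizmo operations applied to
# a group of reconstructed 3D marker positions. Rotation and scaling act
# about the group's geometric center.
import numpy as np

def translate(points, offset):
    return points + np.asarray(offset)

def rotate_about_center(points, R):
    c = points.mean(axis=0)
    return (points - c) @ R.T + c

def scale_about_center(points, factor):
    c = points.mean(axis=0)
    return (points - c) * factor + c

markers = np.array([[0.0, 1.0, 0.0], [0.2, 1.1, 0.0], [0.1, 1.3, 0.1]])
moved = translate(markers, [0.05, 0.0, 0.0])            # translate tool (W)
a = np.radians(10)                                      # rotate tool (E)
R_y = np.array([[np.cos(a), 0, np.sin(a)], [0, 1, 0], [-np.sin(a), 0, np.cos(a)]])
rotated = rotate_about_center(markers, R_y)
spaced = scale_about_center(markers, 1.02)              # scale tool (R): +2% spacing
print(spaced.mean(axis=0), markers.mean(axis=0))        # the centroid is unchanged
```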
Cameras can be modified using the gizmo tool if the Settings Window > General > Calibration > "Editable in 3D View" property is enabled. Without this property turned on the gizmo tool will not activate when a camera is selected to avoid accidentally changing a calibration. The process for using the gizmo tool to fix a misaligned camera is as follows:
Select the camera you wish to fix, then view from that camera (Hotkey: 3).
Select either the Translate or Rotate gizmo tool (Hotkey: W or E).
Use the red diamond visual to align the unlabeled rays roughly onto their associated markers.
Right-click, then choose "Correct Camera Position/Orientation". This will perform a calculation to place the camera more accurately.
Turn on Continuous Calibration if not already done. Continuous calibration should finish aligning the camera into the correct location.
In Motive, Skeleton assets are used for tracking human motions. These assets auto-label specific sets of markers attached to human subjects, or actors, and create skeletal models. Unlike Rigid Body assets, Skeleton assets require additional calculations to correctly identify and label 3D reconstructed markers on multiple semi-rigid body segments. To accomplish this, Motive uses pre-defined Skeleton Marker Set templates, each of which is a collection of marker labels and their specific positions on a subject. According to the selected Marker Set, retroreflective markers must be placed on pre-designated locations of the body. This page details instructions on how to create and use Skeleton assets in Motive.
Note:
Motive license: Skeleton features are supported only in Motive:Body or Motive:Body - Unlimited.
Skeleton Count: The standard Motive:Body license supports up to 3 Skeletons. For tracking a higher number of Skeletons, activate with a Motive:Body - Unlimited license.
Height requirement: For Skeleton tracking, the subject must be between 1'7" and 9'10" tall.
Use the default create layout to open related panels that are necessary for Skeleton creation. (CTRL + 2).
When it comes to tracking human movements, proper marker placement becomes especially important. Motive utilizes pre-programmed Skeleton Marker Sets, and each marker is used to indicate an anatomical landmark when modeling the Skeleton. Thus, all of the markers must be placed at their appropriate locations. If any markers are misplaced, the Skeleton asset may not be created, and even if it is created, bad marker placements may lead to problems. Taking extra care to place the markers at the intended locations is therefore very important and can save time in the post-processing of the data.
Attaching markers directly onto a person's skin can be difficult because of hair, oil, and moisture from sweat. In addition, dynamic human motions tend to shift the markers during capture, so use appropriate skin adhesives to secure marker bases onto the skin. Alternatively, mocap suits allow Velcro marker bases to be used.
Open the Skeleton creation feature and select the Marker Set you wish to use from the drop-down menu. The total number of required markers for each Skeleton is indicated in parentheses after each Marker Set name, and the corresponding marker locations are displayed on an avatar. Instruct the subject to strike a calibration pose (T-pose or A-pose), then carefully follow the figure and place retroreflective markers at the corresponding locations on the subject.
Joint Markers
Joint markers need to be placed carefully along corresponding joint axes. Proper placements will minimize marker movements during a range of motions and will give better tracking results. To accomplish this, ask the subject to flex and extend the joint (e.g. knee) a few times and palpate the joint to locate the corresponding axis. Once the axis is located, attach the markers along the axis where skin movement is minimal during a range of motion.
Wipe off any moisture or oil on the skin before attaching the marker.
Avoid wearing clothing or shoes with reflective materials since they can introduce extraneous reflections.
Tie back hair which can occlude the markers around the neck.
Remove reflective jewelry.
Place markers in an asymmetrical arrangement by offsetting the related segment markers (markers that are not on joints) at slightly different height.
Additional Tips
All markers need to be placed at the respective anatomical landmarks.
Place markers where you can palpate the bone or where there is less soft tissue in between. These spots have fewer skin movements and provide secure marker attachment.
Joint markers are vulnerable to skin movements because of the range of motion in the flexion and extension cycle. In order to minimize the influence, a thorough understanding of the biomechanical model used in the post-processing is necessary. In certain circumstances, the joint line may not be the most appropriate location. Instead, placing the markers slightly superior to the joint line could minimize the soft tissue artifact, still taking care to maintain parallelism with the anatomical joint line.
Use appropriate adhesives to place markers and make sure they are securely attached.
Step 1.
Step 2.
Step 3.
Step 4.
Step 5.
Step 6.
The next step is to select the Skeleton creation pose settings. Under the Pose section drop-down menu, select the desired calibration pose you want to use for defining the Skeleton. This is set to the T-pose by default.
Step 7.
Step 8.
Click Create to create the Skeleton. Once the Skeleton model has been defined, confirm that all Skeleton segments and assigned markers are located at the expected locations. If any of the Skeleton segments seem to be misaligned, delete the Skeleton and create it again after adjusting the marker placements and the calibration pose.
In Edit Mode
Reset Skeleton Tracking
When Skeleton tracking is not acquired successfully during capture, you can use the CTRL + R hotkey to trigger the solver to reboot the Skeleton asset.
A proper calibration posture is necessary because the pose of the created Skeleton will be calibrated from it. Read through the following explanations on proper T-poses and A-poses.
T pose
The T-pose is commonly used as the reference pose in 3D animation for binding characters and assets together, and Motive uses this pose when creating Skeletons. A proper T-pose requires a straight posture, with the back straight and the head looking directly forward. Both arms are stretched out to the sides, forming a "T" shape. Both arms and legs must be straight, and both feet need to be aligned parallel to each other.
A pose
Palms Down: Arms straight and abducted sideways at approximately 40 degrees, palms facing downwards.
Palms Forward: Arms straight and abducted sideways at approximately 40 degrees, palms facing forward. Be careful not to over-rotate the arms.
Elbows Bent: Similar to the other A-poses: arms abducted at approximately 40 degrees, with the elbows bent so that the forearms point towards the front. Palms facing downwards, both forearms aligned.
Calibration markers exist only in the biomechanics Marker Sets.
Many Skeleton Marker Sets do not have medial markers because they can easily collide with other body parts or interfere with the range of motion, all of which increase the chance of marker occlusions.
However, medial markers are beneficial for precisely locating joint axes by associating two markers on the medial and lateral sides of a joint. For this reason, some biomechanics Marker Sets use medial markers as calibration markers. Calibration markers are used only when creating Skeletons and are removed afterward for the actual capture. These calibration markers are highlighted in red in the 3D view when a Skeleton is first created.
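As a simple illustration of why medial calibration markers help, the sketch below (assumed example positions, not a Motive algorithm) estimates a knee joint center as the midpoint of the medial and lateral markers, and the joint axis as the line between them.

```python
# Illustrative sketch with assumed example positions (meters), not a Motive
# algorithm: with medial and lateral markers, the joint center is estimated
# as their midpoint and the joint axis as the line between them.
import numpy as np

lateral_knee = np.array([0.45, 0.50, 0.00])
medial_knee = np.array([0.35, 0.50, 0.02])

joint_center = (lateral_knee + medial_knee) / 2
joint_axis = medial_knee - lateral_knee
joint_axis /= np.linalg.norm(joint_axis)
print(joint_center, joint_axis)
```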
Existing Skeleton assets can be recalibrated using the existing Skeleton information. Essentially, recalibration recreates the selected Skeleton using the same Skeleton Marker Set. This feature recalibrates the Skeleton asset and refreshes the expected marker locations on the asset.
Skeleton recalibration does not work with Skeleton templates with added markers.
Skeleton Marker Sets can be modified slightly by adding or removing markers to or from the template. Follow the steps below for adding or removing markers. Note that modifying, and especially removing, Skeleton markers is not recommended, since changes to default templates may negatively affect Skeleton tracking when done incorrectly. Removing too many markers may result in poor Skeleton reconstructions, while adding too many markers may lead to label swaps. If any modification is necessary, keep the changes minimal.
You can add or remove Marker Constraints from a Rigid Body or a Skeleton using the Builder pane. This essentially adds or removes markers to or from the existing Rigid Body or Skeleton definition. Follow the steps below to add or remove markers:
To Add
Select a Skeleton segment that you wish to add extra markers onto.
Then, CTRL + left-click on the marker that you wish to add to the template.
On the Marker Constraints tool in the Builder pane, click + to add and associate the selected marker to the selected segment.
Reconstruct and Auto-label the Take.
To Remove
[Optional] Under the advanced properties of the target Skeleton, enable Marker Lines property to view which markers are associated with different Skeleton bones.
Select the Skeleton segment that you wish to modify and select the associated Marker Constraints that you wish to dissociate.
Delete the association by clicking "-" in the Constraints pane while the marker is selected.
Reconstruct and Auto-label the Take.
When asset definitions are exported to a Motive user profile, the profile stores the marker arrangements calibrated in each asset, and they can be imported into different Takes without creating a new asset in Motive. Note that these files specifically store the spatial relationship of each marker; therefore, only identical marker arrangements will be recognized and defined with the imported asset.
To export the assets, go to the File menu → Export Assets to export all of the assets in Live mode or in the current TAK file. You can also use the File menu → Export Profile to export other software settings, including the assets.
To export Skeleton constraints XML file
To import Skeleton constraints XML file
This page provides information on aligning a Rigid Body pivot point with a 3D model that replicates a real object.
The screenshots used on this page were captured in Motive 2.x. In Motive 3.x, translation of the Rigid Body pivot point can be done using the Rigid Body translations. See the image below for a 3.x screenshot of the Builder and Properties panes of a Rigid Body.
When using streamed Rigid Body data to animate a 3D model that replicates a real-life object, alignment of the pivot points is necessary. In other words, the location of the Rigid Body pivot must coincide with the location of the pivot point in the corresponding 3D model. If they are not aligned accurately, the animated motion will not be in a 1:1 ratio with the actual motion. This alignment is commonly needed for real-time VR applications where real-life objects are 3D modeled and animated in the scene. The suggested approaches for aligning these pivot points are discussed on this page.
There are two methods for doing this: using a measurement probe to sample 3D reference points, or simply using a reference grayscale view for alignment. The first method, creating and using a measurement probe, is the most accurate and is recommended.
Step 1. Create a Rigid Body of the target object
First, create a Rigid Body from the markers on the target object. By default, the pivot point of the Rigid Body will be positioned at the geometrical center of the marker placement. Then place the object somewhere stable where it will remain stationary.
Step 2. Create a measurement probe.
Step 3. Collect data points to outline the silhouette
Step 4. Attach 3D model
From the sampled 3D points, you can also export the markers created from the probe to Maya or other content creation packages to generate models guaranteed to scale correctly.
Step 5. Translate the pivot point
Step 6. Copy transformation values
Step 7. Zero all transformation values in the Attached Geometry section
Once the Rigid Body pivot point has been moved using the Builder pane, zero all of the transformation configurations under the Attached Geometry property for the Rigid Body.
This page provides detailed instructions on camera system calibration.
Calibration is essential for high-quality optical motion capture systems. During calibration, the system computes the position and orientation of each camera and the amount of distortion in the captured images, and uses them to construct a 3D capture volume in Motive. This is done by observing 2D images from multiple synchronized cameras and triangulating the positions of known calibration markers seen by each camera.
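The following Python sketch illustrates the core triangulation idea in its simplest form, the closest point between two camera rays. Motive's actual solver is more sophisticated, so treat this as a conceptual outline only.

```python
# Conceptual sketch, not Motive's solver: triangulation in its simplest form.
# Each calibrated camera turns a 2D detection into a 3D ray; a marker is
# reconstructed near the point where rays from multiple cameras (nearly)
# intersect. Here: the midpoint of the shortest segment between two rays.
import numpy as np

def triangulate_two_rays(o1, d1, o2, d2):
    """o = camera position, d = unit direction of the marker ray."""
    w0 = o1 - o2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b              # approaches 0 for parallel rays
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    return ((o1 + s * d1) + (o2 + t * d2)) / 2

marker = np.array([0.0, 1.0, 0.0])
cam1, cam2 = np.array([3.0, 2.0, 3.0]), np.array([-3.0, 2.0, 3.0])
d1 = (marker - cam1) / np.linalg.norm(marker - cam1)
d2 = (marker - cam2) / np.linalg.norm(marker - cam2)
print(triangulate_two_rays(cam1, d1, cam2, d2))   # ~[0. 1. 0.]
```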
Please note that if there is any change to the camera setup over the course of a capture, the system must be recalibrated to accommodate the changes. Moreover, even if the setup is not altered, calibration accuracy may naturally deteriorate over time due to ambient factors, such as more or less light entering the capture volume as the day progresses and fluctuations in temperature. Thus, for accurate results, it is recommended to periodically recalibrate the system.
Prepare and optimize the capture volume for setting up a motion capture system.
Apply masks to ignore existing reflections in the camera view.
Collect calibration samples through the wanding process.
Review the wanding result and apply calibration.
Set the ground plane to complete the system calibration.
Cameras need to be appropriately placed and configured to fully cover the capture volume.
Each camera must be mounted securely so that they remain stationary during capture.
Motive's camera settings used for calibration should ideally remain unchanged throughout the capture. Recalibration may be required if there are any significant modifications to settings that influence data acquisition, such as gain or Filter Switcher settings.
Before performing system calibration, all extraneous reflections or unnecessary markers should ideally be removed or covered so that they are not seen by the cameras. If this is not possible, extraneous reflections can be ignored by applying masks over them in Motive.
Active Wanding:
Applying masks to camera views only applies to calibration wands with passive markers. Active calibration wands are capable of calibrating the capture volume while the LEDs of all the cameras are turned off. If the capture volume has a large amount of reflective material that cannot be removed, this method is highly recommended.
Check the corresponding camera view to identify where the extraneous reflection is coming from, and if possible, remove them from the capture volume or cover them so that the cameras do not see them.
Masking from the Cameras Viewport
The wanding process is the core pipeline for collecting calibration samples in Motive. A calibration wand is waved in front of the cameras repeatedly throughout the volume, allowing all cameras to see the calibration markers. Through this process, each camera captures sample data points used to compute its respective position and orientation in 3D space.
It is important to understand the requirements of good wanding samples. For a streamlined process, the following requirements must be met:
At least two cameras must see all three calibration markers simultaneously.
Cameras should only see the calibration markers. If any other reflection or noise is detected during the wanding process, the sample will not be collected and may affect the calibration result negatively. For this reason, the person wanding should not wear anything reflective.
The markers on the calibration wand must be in good condition. If the marker surface is damaged or scuffed, the system may struggle to collect wanding samples.
There are different types of calibration wands suited for different capture applications.
Calibration Wands
CW-500: The CW-500 calibration wand has a wand-width of 500mm when the markers are placed in the configuration A. This wand is suitable for calibrating a large size capture volume because the markers are spaced out further apart, allowing the cameras to easily capture individual markers even at long distances.
CW-500 Active: Having the same dimensions as the CW-500, the active version is recommended for capture volumes that have a large amount of reflective material that cannot be removed. This wand calibrates the volume while the LEDs of all mounted cameras are turned off.
CW-250: The CW-250 calibration wand has a wand-width of 250mm. This wand is suitable for calibrating small to medium size volumes. Its narrower wand-width allows cameras set up in a smaller volume to easily capture all three calibration markers within the same frame. The CW-500 wand can also be used like a CW-250 wand if the markers are positioned in configuration B.
CWM-125 / CWM-250: Both the CWM-125 and CWM-250 wands are designed for calibrating the system for precision capture applications. The accuracy of the calibrated wand width is most precise and reliable on these wands, and they are most suitable for precision capture in small-volume applications.
Before starting the wanding process, if any of the cameras are detecting extraneous reflections, return to the masking steps and make sure they are either masked or removed.
Set the Calibration Type. If you are calibrating a new capture volume, choose Full Calibration.
Under the Wand settings, specify the wand that you will be using to calibrate the volume. It is very important to input the matching wand size here. When an incorrect dimension is given to Motive, the calibrated 3D volume will be scaled incorrectly; for example, entering 250mm for a wand that is actually 500mm wide will make all reconstructed distances half their real size.
Double-check the calibration settings. Once confirmed, press Start Wanding to start collecting wanding samples. Do not have any specific camera selected if you wish to perform calibration for the entire camera system.
Wanding Tips
Avoid waving the wand too fast. This may introduce bad samples.
Avoid wearing reflective clothing or accessories while wanding. This can introduce extraneous samples which can negatively affect the calibration result.
Try not to collect more than 10,000 samples. Extra samples could negatively affect the calibration.
Try to collect wanding samples covering different areas of each camera view. The status indicator on Prime cameras can be used to monitor the sample coverage on individual cameras.
Although it is beneficial to collect samples all over the volume, it is sometimes useful to collect more samples in the vicinity of the target regions where more tracking is needed. By doing so, calibration results will have a better accuracy in the specific region.
Marker Labeling Mode
When performing calibration wanding, please make sure the Marker Labeling Mode is set to the default Passive Markers Only setting. This setting can be found under Application Settings → Live-Reconstruction tab → Marker Labeling Mode. There are known problems with wanding in one of the active marker labeling modes. This applies to both passive marker calibration wands and IR LED wands.
For Prime series cameras, the LED indicator ring displays the status of the wanding process. As soon as wanding is initiated, the LED ring will turn dark. When a camera detects all three markers on the calibration wand, part of its LED ring will glow blue to indicate that the camera is collecting samples, and the clock-position of the blue light will indicate the wand's position in the respective camera view. As calibration samples are collected by each camera, green lights will fill up around the ring to provide feedback on whether enough samples have been collected. Eventually, we want all of the cameras to be filled with a bright green light to ensure that enough samples covering all areas of the camera view are collected. Also, starting with Motive 3.0, the ring light of any camera that has not collected enough samples towards the end of the wanding process will start to glow white.
Calibration Type
You can select a different calibration type before wanding: Full or Refine.
Full: Calibrate cameras from scratch, discarding any prior known position of the camera group or lens distortion information. A Full calibration will also take the longest time to run.
Refine: Adjusts for slight changes in the calibration of the cameras based on a prior calibration. This solves faster than a Full calibration. Only use this if your previous calibration closely reflects the current placement of the cameras; in other words, Refine calibration only works if you have not moved the cameras significantly since you last calibrated them. Only slight changes in camera position and orientation are allowed, such as those that occur naturally from the environment (e.g. mount expansion).
Refinement results will be poor if a full calibration has not been completed previously on the selected cameras.
Calibration Result
The final step of the calibration process is setting the ground plane and the origin. This is accomplished by placing the calibration square in your volume and telling Motive where the calibration square is. Place the calibration square inside the volume where you want the origin to be located and the ground plane to be leveled to. The position and orientation of the calibration square will be referenced for setting the coordinate system in Motive. Align the calibration square so that it references the desired axis orientation.
The longer leg of the calibration square indicates the positive z axis, and the shorter leg indicates the direction of the positive x axis. Accordingly, the positive y axis will automatically be directed upward in a right-handed coordinate system. The next step is to use the level indicator on the calibration square to ensure the orientation is horizontal to the ground. If any adjustment is needed, rotate the knob beneath the markers to adjust the balance of the calibration square.
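The sketch below (illustrative only; the marker positions are assumed) shows how this convention determines a right-handed, y-up coordinate system from the three calibration square markers.

```python
# Illustrative sketch; marker positions are assumed. Derives the right-handed,
# y-up coordinate system from the three calibration square markers using the
# convention above: long leg -> +Z, short leg -> +X, up -> +Y.
import numpy as np

def ground_axes(corner, long_leg_end, short_leg_end):
    z = long_leg_end - corner
    z = z / np.linalg.norm(z)
    x = short_leg_end - corner
    x = x / np.linalg.norm(x)
    y = np.cross(z, x)        # up axis of a right-handed, y-up system
    x = np.cross(y, z)        # re-orthogonalize x against y and z
    return x, y, z

corner = np.array([0.0, 0.0, 0.0])
long_end = np.array([0.0, 0.0, 0.4])     # longer leg along +Z
short_end = np.array([0.2, 0.0, 0.0])    # shorter leg along +X
print(ground_axes(corner, long_end, short_end))   # ~[1 0 0], [0 1 0], [0 0 1]
```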
A custom calibration square can also be used to define the ground plane. A set of three markers is needed, and for an accurate ground plane, these markers need to form a right angle with one arm longer than the other, just like the shape of the calibration square. When using a custom calibration square, select Custom in the drop-down menu, manually input the correct vertical offset, and select the markers before setting the ground plane.
Vertical offset
The Ground Plane Refinement feature is used to improve the leveling of the coordinate plane. To refine the ground plane, use the bottom page selector to access the refine page. Then, place several markers with a known radius on the ground, and set the vertical offset value to the corresponding radius. You can then select these markers in Motive and press Refine Ground Plane, and Motive will refine the leveling of the plane using the position data from each marker. This feature is especially useful when establishing a ground plane for a large volume, because the surface may not be perfectly uniform throughout the plane.
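Conceptually, the refinement amounts to fitting a plane through the marker centers and dropping it by the marker radius. The following Python sketch (not Motive's actual algorithm; the marker positions are assumed) illustrates the idea.

```python
# Illustrative sketch, not Motive's algorithm: least-squares plane fit through
# markers resting on the floor, with the marker radius subtracted as the
# vertical offset.
import numpy as np

def fit_ground_plane(marker_centers, marker_radius):
    """Fit a plane to marker centers via SVD, then drop it by the marker
    radius so the plane sits on the floor rather than at marker height."""
    centroid = marker_centers.mean(axis=0)
    _, _, vt = np.linalg.svd(marker_centers - centroid)
    normal = vt[-1]                       # direction of least variance
    if normal[1] < 0:                     # keep the normal pointing up (+y)
        normal = -normal
    ground_point = centroid - marker_radius * normal
    return ground_point, normal

centers = np.array([[0.0, 0.019, 0.0], [1.0, 0.021, 0.1],
                    [0.2, 0.020, 1.0], [1.1, 0.020, 1.2]])
point, normal = fit_ground_plane(centers, marker_radius=0.02)
print(point, normal)   # plane passes ~through y=0 with normal ~[0, 1, 0]
```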
Note: Whenever there is a change to the system setup (e.g. cameras moved) these calibration files will no longer be relevant and the system will need to be recalibrated.
Enabling/Disabling Continuous Calibration
When capturing throughout a whole day, temperature fluctuations may degrade calibration quality, and you will want to recalibrate the capture volume at different times of the day. However, repeating the entire calibration process can be tedious and time-consuming, especially with a high camera count setup. In this case, instead of repeating the entire calibration process, you can record Takes of the wand waves and the calibration square, and use those Takes to recalibrate the volume in post-processing. This offline calibration can save calibration time on the capture day, because the recorded wanding Take can be processed in post-processing instead. Users can also inspect the collected capture data and decide to recalibrate a recorded Take only when signs of degraded calibration quality are seen in the captures.
Offline Calibration Steps
1) Capture wanding/ground plane Takes. At different times of the day, record wanding Takes that closely resemble the calibration wanding process. Also record corresponding ground plane Takes with the calibration square set in the volume for defining the ground plane.
Whenever a system is calibrated, a Calibration Wanding file is saved, and it can be used to reproduce the calibration file through the offline calibration process.
2) Load the recorded Wanding Take. If you wish to recalibrate the cameras for captured Takes during playback, load the wanding Take that was recorded around the same time.
6) Load the recorded Ground Plane Take.
7) Open the saved calibration file. With the Ground Plane Take loaded in Motive, open the exported calibration file, and the saved camera calibration will be applied to the ground plane take.
8) Motive: Perspective View. From 2D data of the Ground Plane Take, select the calibration square markers.
10) Motive: Perspective View. Switch back to the Live mode. The recorded Take is now re-calibrated.
The partial calibration feature allows you to update the calibration for some selection of cameras in a system. The way this feature works is by updating the position of the selected cameras relative to the already calibrated cameras. This means that you only need to wand in front of the selected cameras as long as there is at least one unselected camera that can also see the wand samples.
This feature is especially helpful for high camera count systems where you only need to adjust a few cameras instead of recalibrating the whole system. One common way to get into this situation is by bumping a single camera; partial calibration allows you to quickly recalibrate the single bumped camera that is now out of place. This feature is also useful for those who need to calibrate without changing the location of the ground plane: as long as there is at least one unselected camera, Motive can use that camera to retain the position of the ground plane relative to the cameras, so the ground plane does not need to be reset.
Partial Calibration Steps
Set Calibration Type: In most cases you will want to set this to Full, but if the camera only moved slightly, Refine works as well.
Specify the wand type.
Choose Calibrate Selected Cameras from the dialogue window.
Wave the calibration wand mainly within the view of the selected cameras.
Click Calculate. At this point, only the selected cameras will have their calibration updated.
Notes:
This feature relies on the unselected cameras being in a good calibration state. If the unselected cameras are out of calibration, then using this feature will return a bad calibration.
Partial calibration does not update the calibration of unselected cameras. However, the calibration report that Motive provides does include all cameras that received samples, selected or unselected.
The partial calibration process can also be used for adding new cameras onto an existing calibration. Use the Full calibration type in this case.
Cameras can be modified using the gizmo tool if the Settings Window > General > Calibration > "Editable in 3D View" property is enabled. Without this property turned on the gizmo tool will not activate when a camera is selected to avoid accidentally changing a calibration. The process for using the gizmo tool to fix a misaligned camera is as follows:
Select the camera you wish to fix, then view from that camera (Hotkey: 3).
Select either the Translate or Rotate gizmo tool (Hotkey: W or E).
Use the red diamond visual to align the unlabeled rays roughly onto their associated markers.
Right-click, then choose "Correct Camera Position/Orientation". This will perform a calculation to place the camera more accurately.
Turn on Continuous Calibration if not already done. Continuous calibration should finish aligning the camera into the correct location.
The OptiTrack motion capture system is designed to track retro-reflective markers. However, active LED markers can also be tracked with appropriate customization. If you wish to use Active LED markers for capture, the system will ideally need to be calibrated using an active LED wand. Please contact us for more details regarding Active LED tracking.
In Motive, Rigid Body assets are used for tracking rigid, unmalleable objects. A set of markers is securely attached to the tracked object, and the respective placement information is used to identify the object and report 6 Degrees of Freedom (6DoF) data. Thus, it's important that the distances between the placed markers stay the same throughout the range of motion. Either passive retroreflective markers or active LED markers can be used to define and track a Rigid Body. This page details instructions on how to create Rigid Bodies in Motive and other useful features associated with these assets.
A Rigid Body in Motive is a collection of three or more markers on an object that are interconnected to each other with the assumption that the tracked object is unmalleable. More specifically, it assumes that the spatial relationship among the attached markers remains unchanged and that the marker-to-marker distance does not deviate beyond the allowable tolerance defined under the corresponding Rigid Body properties; otherwise, the involved markers may become unlabeled. Cover any reflective surfaces on the Rigid Body with non-reflective materials, and attach the markers on the exterior of the Rigid Body where cameras can easily capture them.
Tip: If you wish to get more accurate 3D orientation data (pitch, roll, and yaw) for a Rigid Body, it is beneficial to spread the markers as far apart as you can within the same Rigid Body. Placing the markers this way gives the orientation solve a longer baseline, so small errors in the reconstructed marker positions translate into smaller errors in the computed orientation.
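A quick back-of-the-envelope calculation shows the effect. The 0.5 mm per-marker error below is an assumed figure for illustration, not a Motive specification:

```python
# Back-of-the-envelope sketch; the 0.5 mm error is an assumed figure, not a
# Motive specification. Orientation error ~ position error / marker spread.
import math

position_error_mm = 0.5
for spread_mm in (50, 150, 400):
    angle_deg = math.degrees(math.atan2(position_error_mm, spread_mm))
    print(f"{spread_mm:3d} mm spread -> ~{angle_deg:.2f} deg orientation error")
# 50 mm -> ~0.57 deg, 150 mm -> ~0.19 deg, 400 mm -> ~0.07 deg
```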
In 3D space, a minimum of three coordinates is required to define a plane using vector relationships; likewise, at least three markers are required to define a Rigid Body in Motive. Whenever possible, it is best to use 4 or more markers to create a Rigid Body. Additional markers provide more 3D coordinates for computing the position and orientation of the Rigid Body, making overall tracking more stable and less vulnerable to marker occlusions. When any of the markers are occluded, Motive can reference the other visible markers to solve for the missing data and compute the position and orientation of the Rigid Body.
However, placing too many markers on one Rigid Body is not recommended. When too many markers are placed in close vicinity, markers may overlap in the camera views, and Motive may not resolve the individual reflections. This increases the likelihood of label swaps during capture. Securely place a sufficient number of markers (usually fewer than 10), just enough to cover the main frame of the Rigid Body.
Tip: The recommended number of markers per Rigid Body is 4 ~ 12. A Rigid Body cannot be created with more than 20 markers in Motive.
Within a Rigid Body asset, the markers should be placed asymmetrically because this provides a clear distinction of orientations. Avoid placing the markers in symmetrical shapes such as squares, isosceles triangles, or equilateral triangles. Symmetrical arrangements make asset identification difficult, and they may cause the Rigid Body assets to flip during capture.
When tracking multiple objects using passive markers, it is beneficial to create unique Rigid Body assets in Motive. Specifically, you need to place retroreflective markers in a distinctive arrangement between each object, and it will allow Motive to more clearly identify the markers on each Rigid Body throughout capture. In other words, their unique, non-congruent, arrangements work as distinctive identification flags among multiple assets in Motive. This not only reduces processing loads for the Rigid Body solver, but it also improves the tracking stability. Not having unique Rigid Bodies could lead to labeling errors especially when tracking several assets with similar size and shape.
Note for Active Marker Users
What Makes Rigid Bodies Unique?
The key idea behind creating unique Rigid Bodies is to avoid geometrical congruency among the multiple Rigid Bodies in Motive; see the sketch after this list.
Unique Marker Arrangement. Each Rigid Body must have a unique, non-congruent marker placement, creating a unique shape when the markers are interconnected.
Unique Marker-to-Marker Distances. When tracking several objects, introducing unique shapes can be difficult. Another solution is to vary the marker-to-marker distances. This creates similar shapes with varying sizes, making them distinct from one another.
Unique Marker Counts. Adding extra markers is another method of introducing uniqueness. Extra markers not only make the Rigid Bodies more distinctive, but also provide more options for varying the arrangements to avoid congruency.
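One quick way to sanity-check uniqueness is to compare the sorted lists of marker-to-marker distances of two arrangements: congruent arrangements share the same list, so a matching list is a strong warning sign. The following Python sketch (illustrative only, not Motive's solver; the 5 mm tolerance is an assumption) demonstrates this.

```python
# Illustrative sketch, not Motive's solver: congruent marker arrangements
# share the same sorted list of marker-to-marker distances and are therefore
# likely to be confused at auto-label time.
import numpy as np
from itertools import combinations

def pairwise_distances(points):
    return sorted(np.linalg.norm(points[i] - points[j])
                  for i, j in combinations(range(len(points)), 2))

def congruent(a, b, tol=0.005):   # tol in meters (5 mm, assumed)
    da, db = pairwise_distances(a), pairwise_distances(b)
    return len(da) == len(db) and all(abs(x - y) < tol for x, y in zip(da, db))

body_a = np.array([[0, 0, 0], [0.10, 0, 0], [0.04, 0.08, 0], [0.02, 0.03, 0.06]])
body_b = body_a + np.array([1.0, 0.0, 0.5])        # same shape, moved in space
body_c = body_a * 1.15                             # same shape, 15% larger
print(congruent(body_a, body_b))   # True  -> likely to be confused
print(congruent(body_a, body_c))   # False -> distinct marker-to-marker distances
```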
What Happens When Rigid Bodies Are Not Unique?
Multiple Rigid Bodies Tracking
Depending on the object, there could be limitations on marker placements and number of variations of unique placements that could be achieved. The following list provides sample methods for varying unique arrangements when tracking multiple Rigid Bodies.
1. Create Distinctive 2D Arrangements. Create distinctive, non-congruent, marker arrangements as the starting point for producing multiple variations, as shown in the examples above.
2. Vary Heights. Use marker bases or posts with different heights to introduce variations in elevation and create additional unique arrangements.
3. Vary Maximum Marker to Marker Distance. Increase or decrease the overall size of the marker arrangements.
4. Add Two (or More) Markers. Lastly, if an additional variation is needed, add extra markers to introduce uniqueness. We recommend adding at least two extra markers in case one of them is occluded.
A set of markers attached to a rigid object can be grouped and auto-labeled as a Rigid Body. This Rigid Body definition can be utilized in multiple Takes to continuously auto-label the same Rigid Body markers. Motive recognizes the unique spatial relationship in the marker arrangement and automatically labels each marker to track the Rigid Body. At least three coordinates are required to define a plane in 3D space; therefore, a minimum of three markers is essential for creating a Rigid Body.
Step 1.
Step 2.
On the Builder pane, confirm that the selected markers match the markers that you wish to define the Rigid Body from.
Step 3.
Click Create to define a Rigid Body asset from the selected markers.
You can also create a Rigid Body by doing the following actions while the markers are selected:
Perspective View (3D viewport): While the markers are selected, right-click on the perspective view to access the context menu. Under the Rigid Body section, click Create From Selected Markers.
Hotkey: While the markers are selected, use the create Rigid Body hotkey (Default: Ctrl +T).
Step 4.
Defining Assets in Edit mode:
Default Properties
Modifying Properties
You can add or remove Marker Constraints from a Rigid Body in the Constraints pane.
To add a marker you can select the marker in the Perspective view and make sure an existing Rigid Body is selected from the dropdown in the Constraints pane.
Once selected you can click the '+' in the Constraints pane to add the marker to the Rigid Body.
To remove a marker from the Rigid Body, simply select the marker in the Constraints pane and click '-'.
Use the gizmo tools from the perspective view options to easily modify the position and orientation of Rigid Body pivot points. You can translate and rotate the Rigid Body pivot, assign the pivot to a specific marker, and/or assign the pivot to a mid-point among selected markers.
Select Tool (Hotkey: Q): Select tool for normal operations.
Translate Tool (Hotkey: W): Translate tool for moving the Rigid Body pivot point.
Rotate Tool (Hotkey: E): Rotate tool for reorienting the Rigid Body coordinate axis.
Scale Tool (Hotkey: R): Scale tool for resizing the Rigid Body pivot point.
Rigid Body tracking data can either be exported to a separate file or streamed to client applications in real time.
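For the streaming case, the rough sketch below is modeled on the PythonClient sample shipped with the NatNet SDK; method and callback names vary between SDK versions, so treat it as an outline of the workflow rather than the exact API.

```python
# Rough sketch modeled on the PythonClient sample shipped with the NatNet SDK.
# Method and callback names vary between SDK versions; treat this as an
# outline of the streaming workflow rather than the exact API.
from NatNetClient import NatNetClient  # provided with the NatNet SDK samples

def receive_rigid_body_frame(body_id, position, rotation):
    # position: (x, y, z) in meters; rotation: quaternion (qx, qy, qz, qw)
    print(f"Rigid Body {body_id}: pos={position} rot={rotation}")

client = NatNetClient()
client.set_client_address("127.0.0.1")   # machine running this script
client.set_server_address("127.0.0.1")   # machine running Motive
client.set_use_multicast(True)           # must match Motive's streaming settings
client.rigid_body_listener = receive_rigid_body_frame
client.run()                             # starts the listening threads
```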
When asset definitions are exported to a Motive user profile, the profile stores the marker arrangements calibrated in each asset, and they can be imported into different Takes without creating a new asset in Motive. Note that these files specifically store the spatial relationship of each marker; therefore, only identical marker arrangements will be recognized and defined with the imported asset.
To export the assets, go to the File menu → Export Assets to export all of the assets in Live mode or in the current TAK file. You can also use the File menu → Export Profile to export other software settings, including the assets.
This feature is supported in Live Mode only.
The Rigid Body refinement tool improves the accuracy of the Rigid Body calculation in Motive. When a Rigid Body asset is initially created, Motive references only a single frame when defining the Rigid Body. The Rigid Body refinement tool allows Motive to collect additional samples in Live mode to achieve more accurate tracking results. More specifically, this feature improves the calculation of the expected marker locations of the Rigid Body as well as the position and orientation of the Rigid Body itself; a conceptual sketch of this idea follows the steps below.
Steps
Select the Rigid Bodies from the Type dropdown menu.
Hold the selected physical Rigid Body at the center of the capture volume so that as many cameras as possible can clearly capture the markers on the Rigid Body.
Slowly rotate the Rigid Body to collect samples at different orientations until the progress bar is full.
Once all necessary samples are collected, the Refine and Create + Refine buttons will appear again in the Builder pane and the refinements will have been applied.
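As noted above, the refinement can be thought of as expressing the tracked markers in the Rigid Body's local frame in every sampled frame and averaging them, which suppresses per-frame noise in the expected marker locations. The sketch below (an assumed model, not Motive's actual algorithm) illustrates this.

```python
# Conceptual sketch, not Motive's refinement algorithm: average each marker's
# position in the Rigid Body's local frame over many sampled orientations.
import numpy as np

def refine_marker_model(samples):
    """samples: iterable of (R, t, markers) where R (3x3) and t (3,) are the
    Rigid Body pose per frame and markers is an (N, 3) array of world-space
    marker positions. Returns the averaged local marker positions."""
    local = [(markers - t) @ R for R, t, markers in samples]  # R^T (p - t)
    return np.mean(local, axis=0)

# Synthetic demo: 200 frames at random yaw angles with 0.5 mm marker noise.
rng = np.random.default_rng(0)
true_local = np.array([[0.05, 0, 0], [0, 0.07, 0], [0, 0, 0.04], [0.03, 0.03, 0.03]])
frames = []
for _ in range(200):
    a = rng.uniform(0, 2 * np.pi)
    R = np.array([[np.cos(a), 0, np.sin(a)], [0, 1, 0], [-np.sin(a), 0, np.cos(a)]])
    t = rng.uniform(-1, 1, 3)
    markers = true_local @ R.T + t + rng.normal(0, 0.0005, true_local.shape)
    frames.append((R, t, markers))
print(refine_marker_model(frames))  # close to true_local
```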
This page provides detailed information on the continuous calibration feature and how to enable it.
The Continuous Calibration feature ensures your system always remains optimally calibrated, requiring no user intervention to maintain the tracking quality. It uses highly sophisticated algorithms to evaluate the quality of the calibration and the triangulated marker positions. Whenever the tracking accuracy degrades, Motive will automatically detect and update the calibration to provide the most globally optimized tracking system.
Ease of use. This feature provides a much easier user experience because the capture volume does not have to be recalibrated as often, saving a lot of time. You can simply enable this feature and have Motive maintain the calibration quality.
Optimal tracking quality. Always maintains the best tracking solution for live camera systems. This ensures that your captured sessions retain the highest quality calibration. If the system receives inadequate information from the environment, the calibration will not update, so your system never degrades based on sporadic or spurious data. A moderate increase in the number of real optical tracking markers in the volume and an increase in camera overlap improve the likelihood of a higher quality update.
Works with all camera types. Continuous calibration works with all OptiTrack camera models.
For continuous calibration to work as expected, the following criteria must be met:
Markers Must Be Tracked. Continuous calibration looks at tracked reconstructions to assess and update the calibration. Therefore, at least some number of markers must be tracked within the volume.
Majority of Cameras Must See Markers. A majority of the cameras in a volume need to receive some tracking data within a portion of their field of view in order to initiate the calibration process. Because of this, traditional perimeter camera systems typically work best. Each camera should additionally see at least 4 markers for optimal calibration. If not all of the cameras see the markers at the same time, anchor markers will need to be set up to improve the calibration updates.
Anchor markers can be set up in Motive to further improve continuous calibration. When properly configured, anchor markers improve continuous calibration updates, especially on systems that consist of multiple sets of cameras separated into different tracking areas, by obstructions or walls, without camera view overlap. They also provide extra assurance that the global origin will not shift during each update, although the continuous calibration feature itself already checks for this.
Follow the steps below for setting up the anchor marker in Motive:
Adding Anchor Markers in Motive
Place any number of markers in the volume to assign them as the anchor markers.
Make sure these markers are securely fixed in place within the volume. It's important that the distances between these markers do not change throughout the continuous calibration updates.
In the 3D viewport, select the markers that are going to be assigned as anchors.
Click on Add to add the selected markers as anchor markers.
Once markers are added as anchor markers, magenta spheres will appear around the markers indicating the anchors have been set.
Add more anchors as needed; again, it's important that these anchor markers do not move during tracking. If the anchor markers ever need to be reset, for example because a marker was displaced, you can clear the anchor markers and reassign them.
For multi-room setups, it is useful to group cameras into partitions. This allows for Continuous Calibration to run in each individual room without the need for camera view overlap.
From the Properties pane of a camera you can assign a Partition ID from the advanced settings.
You'll want to assign all the cameras in the same room the same Partition ID. Once assigned, these cameras will all contribute to continuous calibration for their particular space. This helps ensure the accuracy of continuous calibration for each individual space that is part of the whole system.
This notice indicates that more markers need to be visible to a particular camera. For instance, if camera 2 is not seeing enough markers in its view, the Log pane will inform you that more markers are needed for that particular camera.
This indicates the need for more markers to be spread in more areas of the camera view.
Following the Motive 3.0.2 release, an internet connection is no longer required for initial use of Motive. If you are currently using Motive 3.0.1 or older, please install this new release from our webpage. Please note that an internet connection is still required to download Motive.exe from the OptiTrack website.
Important Update:
New licensing system in Motive 3. Please check the licensing documentation for details on Motive licenses.
Security Key (Motive 3.x): Starting from version 3.0, a USB Security Key will be required to use Motive. USB Hardware Keys that were used for activating older versions of Motive will no longer work with 3.0, and they will need to be replaced with the USB Security key. For any questions, please contact us.
Hardware Key (Motive 2.x or below): Motive 2.x versions will still require USB Hardware Key.
USB Cameras
USB cameras, including the Flex series, tracking bars, and the Slim3U, are not currently supported in 3.x versions. For USB camera systems, please use Motive 2.x versions.
Required PC specifications may vary depending on the size of the camera system. Generally, the recommended specs are required for systems with more than 24 cameras.
To install Motive, you must first download the Motive installer from our website. Follow the Downloads link under the Support page, and you will be able to find the newest version of Motive, or previous releases if needed. Both Motive:Body and Motive:Tracker use the same software installer.
1. Run the Installer
When the download is complete, run the installer to initiate the installation process.
2. Install the USB Driver and Dependencies
If you are installing Motive for the first time, it will prompt you to install the OptiTrack USB Driver. This driver is required for all OptiTrack USB devices including the Security Key. You may also need to install other dependencies such as the C++ redistributable. After all dependencies have been installed, continue onto installing Motive.
3. Install Motive
Follow the installation prompts and install Motive in your desired directory. We recommend installing the software in the default directory, C:\Program Files\OptiTrack\Motive.
4. OptiTrack Peripheral Module
At the Custom Setup section of the installation process, you will be asked to choose whether to install the Peripheral Module along with Motive. If you plan to use force plates, NI-DAQ, or EMG devices along with the motion capture system, make sure the Peripheral Module is installed. If you are not going to be using these devices, you may skip to the next step.
Peripheral Module NI-DAQ
If you decided to install the Peripheral Module, you will be prompted to install the OptiTrack Peripherals Module along with the NI-DAQmx driver at the end of the Motive installation. Press Yes to install the plugins and the NI-DAQmx driver. This may take a few minutes and only needs to be done once.
5. Finish Installation
Firewall / Anti-Virus
Make sure all antivirus software on the Host PC allows Motive.
For Ethernet cameras, make sure the Windows firewall is configured to allow the camera network to be recognized. Disabling the firewall entirely is another option.
High-Performance
Windows' power saving mode limits CPU usage. To best utilize Motive, set the power plan to High Performance to remove these limitations. You can configure High Performance mode from Control Panel → Hardware and Sound → Power Options
as shown in the image below.
Graphics Card Settings
This is only for computers with integrated graphics.
For computers with integrated graphics, please make sure Motive is set to run on the dedicated graphics card. If the host computer has integrated graphics on the CPU, the PC may switch to the integrated graphics when the computer goes into sleep mode; when this happens, the viewport may become unresponsive after the computer wakes. If your computer has integrated graphics, go to the Graphics Settings in Windows and browse to Motive to set it as high-performance graphics.
Once you have installed Motive, the next step is to activate the software using the provided license information and a USB Security Key. Motive activation requires a valid Motive 3.0 license, a USB Security Key, and a computer with USB C ports or an adapter for USB A to USB C.
For Motive 3.0 and above, a USB Security Key is required to use the camera system. This key is different from the previous Hardware Key, and it improves the security of the camera system. Security Keys need to be purchased separately. For more information, please refer to the following page:
There are five different types of Motive licenses: Motive:Body-Unlimited, Motive:Body, Motive:Tracker, Motive:Edit-Unlimited, and Motive:Edit. Each license unlocks different features in the software depending on the use case that the license is intended to facilitate.
The Motive:Body and Motive:Body-Unlimited licenses are intended for either small (up to 3) or large-scale Skeleton tracking applications.
The Motive:Tracker license is intended for real-time Rigid Body tracking applications.
The Motive:Edit and Motive:Edit-Unlimited licenses are intended for users modifying data after it has been captured.
Step 1. Launch Motive
First, launch Motive.
Step 2. Activate
The Motive splash screen will pop up and indicate that the license is not found. Open the license tool and fill out the fields using the provided license information. You will need the License Serial Number and License Hash from your order invoice, and the Hardware Key Serial Number indicated on the USB Security Key or the Hardware Key. Once you have entered all the information, click Activate. If you have already activated the license before on another machine, make sure the same name is entered when activating.
Online Activation Tool
The Motive license can also be activated online using the Online License Activation tool. When you use the online tool, you will receive the license file via email; in this case, you will have to place the file in the license folder. Once the license file is placed, insert the corresponding USB Hardware Key to use Motive.
Step 3. License File
If Motive is activated properly, license files will be placed in the license folder. This folder can be accessed from the splash screen or by navigating to Start Menu → All Programs → OptiTrack → OptiTrack License Folder.
License Folder: C:\ProgramData\OptiTrack\License
Step 4. Security Key
If not already done, insert the corresponding Security Key that was used to activate the license. The matching security key must be connected to the computer in order to use Motive.
Notes on Connecting the Security Key
Connect the Security Key to a USB port where the USB bus does not have a lot of traffic. This is especially important if you have other peripheral devices that connect to the computer via USB. If there is too much data flowing through the USB bus used by the Security Key, Motive might not be able to connect to the cameras.
Make sure the USB Hardware Key is unplugged. If both the Hardware Key and the Security Key are plugged into the same computer, Motive may not activate properly.
About Motive
You can also check the status of the activated license from the About Motive pop-up. This can be accessed from the splash screen when Motive fails to detect a valid license, or from the Help → About Motive menu in Motive.
License Data:
In this panel, you can also export the license data into a TXT file by clicking License Data.... If you are having any issues activating Motive, please export the license data file and attach it to your support email.
OptiTrack software can be used on a new computer by reactivating the license, using the same license information. When reactivating, make sure to enter the same name information as before. After the license has been reactivated, the corresponding USB Security Key needs to be inserted into the PC in order to verify and run the software.
Another method of using the license is by copying the license file from the old computer to the new computer. The license file can be found in the OptiTrack License folder which can be accessed through the Motive Splash Screen or top Help menu in Motive.
For more information on licensing of Motive, refer to the Licensing FAQs from the OptiTrack website:
For more questions, contact our Support:
When contacting support, please attach the license data (TXT) file exported from the About Motive panel as a reference.
Please note that the following tutorial videos were created in an older version of Motive. The workflow in 3.0 is slightly different and only requires you to select Translate, Rotate, or Scale to begin manipulating your asset.
All markers need to be placed at the respective anatomical locations of the selected Skeleton. Skeleton markers can be divided into two categories: markers that are placed along joint axes (joint markers) and markers that are placed on body segments (segment markers).
Segment markers are markers placed on Skeleton body segments, but not around a joint. For best tracking results, each segment marker placement must be incongruent with the associated segment on the opposite side of the Skeleton (e.g., left thigh and right thigh). Segment markers must also be placed asymmetrically within each segment. This helps the Skeleton solver distinguish the left and right sides of the corresponding Skeleton segments throughout the capture. This asymmetrical placement is also emphasized in the avatars shown in the Builder pane. Segment markers that can be slightly moved to different places on the same segment are highlighted on the 3D avatar in the Skeleton creation window.
See also:
When using the biomechanics Marker Sets, markers must be placed precisely and with extra care, because these placements directly relate to the coordinate system definition of each respective segment, affecting the resulting biomechanical analysis. The markers need to be placed on the skin for a direct representation of the subject's movement; mocap suits are not suitable for biomechanics applications. While the basic marker placement must follow the avatar in the Builder pane, additional details on accurate placements are available on the dedicated documentation page.
From the Skeleton creation options, select a Skeleton Marker Set template from the Template drop-down menu. This will bring up a Skeleton avatar displaying where the markers need to be placed on the subject.
Refer to the avatar and place the markers on the subject accordingly. For accurate placements, ask the subject to stand in the calibration pose while placing the markers. It is important that these markers get placed at the right spots on the subject's body for the best Skeleton tracking, so extra attention is needed when placing them.
The magenta markers indicate the segment markers that can be placed at a slightly different position within the same segment.
Double-check the marker counts and their placements; the system should be tracking the attached markers at this point.
In the Builder pane, make sure the numbers under the Markers Needed and Markers Detected sections match. If the Skeleton markers are not automatically detected, manually select the Skeleton markers in the 3D viewport.
Select a desired set of marker labels under the Labels section. Here, you can just use the Default labels to assign labels that are defined by the Marker Set template. Or, you can also assign custom labels by loading previously prepared files in the label section.
Ask the subject to stand in the selected calibration pose. Here, standing in a proper calibration posture is important because the pose of the created Skeleton will be calibrated from it. For more details, read the section.
If you are creating a Skeleton in the post-processing of captured data, you will have to the Take to see the Skeleton modeled and tracked in Motive.
By configuring , you can modify the display settings as well as Skeleton creation pose settings for Skeleton assets. For newly created Skeletons, default Skeleton creation properties are configured under the pane. Properties of existing, or recorded, Skeleton assets are configured under the while the respective Skeletons are selected in Motive.
The A-pose is another type of calibration pose that can be used to create Skeletons; set the Skeleton Create Pose setting to the A-pose you wish to calibrate with. This pose is especially beneficial for subjects who have restrictions in lifting their arms. Unlike the T-pose, the arms are abducted at approximately 40 degrees from the midline of the body, creating an A-shape. There are three different types of A-pose: palms down, palms forward, and elbows bent.
After creating a Skeleton, the calibration markers need to be removed. First, detach the calibration markers from the subject. Then, in Motive, right-click on the Skeleton in the perspective view to access the context menu and click Skeleton → Remove Calibration Markers. Check that the Skeleton no longer expects markers at the corresponding medial positions.
To recalibrate Skeletons, select all of the associated Skeleton markers in the perspective view and click Recalibrate From Markers, which can be found in the Skeleton context menu. When using this feature, select a Skeleton and the markers that are related to the corresponding asset.
Skeleton marker colors and marker sticks can be viewed in the perspective view. They provide color schemes for clearer identification of Skeleton segments and individual marker labels in the perspective viewport. To make them visible, enable Marker Sticks and Marker Colors under the visual aids in the perspective view. A default color scheme is assigned when creating a Skeleton asset; marker colors and labels can be modified as needed.
Constraints store information on marker labels, colors, and marker sticks, which can be modified, exported, and re-imported as needed. For more information, please refer to the corresponding page.
When adding or removing markers in the Edit mode, the Take needs to be auto-labeled again to re-label the Skeleton markers.
Access the Modify tab on the Builder pane.
When you add extra markers to Skeletons, the markers will be labeled as Skeleton_CustomMarker#. The labels can then be changed as needed.
Enable selection of Marker Constraints from the visual aids options in the perspective view.
Access the Modify tab on the Builder pane.
Assets can be exported into the Motive user profile (.MOTIVE) file if they need to be re-imported. The user profile is a text-readable file that contains various configuration settings in Motive, including the asset definitions.
There are two ways of obtaining Skeleton joint angles. Rough representations of joint angles can be obtained directly from Motive, but the most accurate representations can be obtained by pipelining the tracking data into third-party biomechanics analysis and visualization software (e.g. Visual3D or The MotionMonitor).
For biomechanics applications, joint angles must be computed accurately using the respective Skeleton model solve, which can be accomplished by using biomechanical analysis software. Export or stream tracking data from Motive and import it into an analysis software for further calculation. From the analysis, various biomechanics metrics, including joint angles, can be obtained.
Joint angles generated and exported from Motive are intended for basic visualization purposes only and should not be used for any type of biomechanical or clinical analysis. A rough representation of joint angles can be obtained by either exporting or streaming the Skeleton Rigid Body tracking data. When exporting the tracking data into CSV, set the export setting to Local to obtain bone segment position and orientation values with respect to their parent segments, roughly representing the joint angles as a comparison of two hierarchical coordinate systems. When streaming the data, set the corresponding local-coordinates setting to true in the streaming settings to get relative joint angles.
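As a rough illustration of the comparison described above, the hedged sketch below converts a bone's local-orientation quaternion (as found in a CSV exported with the Local setting) into Euler angles. This is a hypothetical post-processing snippet, not part of Motive; it assumes SciPy is available and that the quaternion values have already been read from the file.

```python
# Hypothetical post-processing sketch (not part of Motive): convert a bone's
# local-orientation quaternion from a Local-coordinate CSV export into rough
# Euler joint angles. The sample values are made up.
from scipy.spatial.transform import Rotation as R

def rough_joint_angles(qx, qy, qz, qw, order="XYZ"):
    """Approximate joint angles (degrees) for one frame of a child bone."""
    return R.from_quat([qx, qy, qz, qw]).as_euler(order, degrees=True)

# A quaternion for a 90-degree rotation about the parent's Y axis:
print(rough_joint_angles(0.0, 0.7071, 0.0, 0.7071))  # -> [ 0. 90.  0.]
```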
Each Skeleton asset has its marker templates stored in an XML file. By exporting, customizing, and importing the constraint XML files, a Skeleton Marker Set can be modified. Specifically, customizing the XML files allows you to modify Skeleton marker labels, marker colors, and marker sticks within a Skeleton asset. For detailed instructions on modifying Skeleton XML files, read through the corresponding page.
To export a Skeleton XML file, right-click on a Skeleton asset under the Assets pane and use the constraints export option to export the corresponding Skeleton marker XML file.
You can import a marker XML file under the Labels section of the Builder pane when first creating a new Skeleton. To import a constraints XML file on an existing Skeleton, right-click on the Skeleton asset under the Assets pane and click Import Constraints.
For instructions on creating a measurement probe, please refer to the measurement probe page. You can purchase our probe or create your own; all you need is four markers with a static relationship to a projected tip.
Use the created measurement probe to collect sample points that outline the silhouette of your object. Mark all of the corners and other key features on the object.
After the 3D data points have been generated using the probe, attach your game geometry (OBJ file) to the Rigid Body by enabling the corresponding display property and importing the geometry under the attached geometry property.
The next step is to translate the 3D model so that the attached model aligns with the silhouette sample collected in Step 3. The model can be easily translated and rotated using the Gizmo tools. Move, rotate, and scale the asset until it is aligned with the silhouette.
For accurate alignment, it may be easier to decrease the size of the marker visuals. This can be changed from the marker display settings under the application settings panel.
After you have translated, rotated, and scaled the pivot point of the Rigid Body to align the attached 3D model with the sampled data points, the transformation values will be shown under the corresponding transformation property.
Copy and paste these transformation parameters into the Rigid Body location and orientation options under the Edit tab in the Builder pane. This will translate the pivot point of the Rigid Body in Motive and align it with the pivot point of the 3D model.
Alternatively, if the probe method is not applicable, you can switch one of the cameras into grayscale view, right-click on the camera in the Cameras view, and select Make Reference. This will create a Rigid Body overlay in the reference view that lets you align the Rigid Body pivot using a similar approach as above.
By default, Motive will start up in the calibration layout containing the necessary panes for the calibration process. This layout can also be accessed by clicking on the calibration layout option in the top-right corner, or by using the Ctrl+1 hotkey.
The Calibration pane will guide you through the calibration process. This pane can be accessed by clicking on its icon on the toolbar or by entering the calibration layout from the top-right corner. For a new system calibration, click the New Calibration button and it will take you to the next step.
When the cameras detect reflections in their view, a warning sign will indicate which cameras are seeing reflections; for Prime series cameras, the indicator LED ring will also light up in white.
Masks can be applied by clicking Mask in the Calibration pane, which will apply red masks over all of the reflections detected in the 2D camera views. Once masked, the pixels in the masked regions will be entirely filtered out from the data. Please note that masks are applied additively, so if there are already masks applied in the camera view, clear them out first before applying new ones.
Check the 2D camera views to see if any of the cameras are seeing extraneous reflections or noise.
In the Calibration pane, click Mask to apply masks over all of the existing reflections in the view.
Masks can also be applied from the Cameras viewport if needed. In the Cameras view pane, while a camera is selected, click on the gear icon on the toolbar, and options to apply auto-masks or clear existing masks will be listed. You can also click on the masking icon to switch between modes for manually applying and/or erasing masks.
You should be careful when using the masking features because masked pixels are completely filtered out of the 2D data. In other words, the data in masked regions will not be collected for computing the 3D data, and excessive use of masking may result in data loss or frequent marker occlusions. For this reason, all removable reflective objects should be taken out or covered before using the masking tool so that masking can be minimized. After all reflections are removed or masked from the view, proceed to the wanding process.
Start wanding. Bring your calibration wand into the capture volume and start waving the wand gently across the entire capture volume. Gently draw figure-eights repeatedly with the wand to collect samples at varying orientations, and cover as much space as possible for sufficient sampling. Wanding trails will be shown in color in the view panes, and a table displaying the status of the wanding process will show up in the Calibration pane to monitor the progress. For best results, wand evenly and comprehensively throughout the volume, covering both low and high elevations. If you wish to start calibrating inside the volume, cover one of the markers and expose it wherever you wish to start wanding. When at least two cameras detect all three markers while no other reflections are present in the volume, the wand will be recognized and Motive will start collecting samples.
You'll want to wand until the camera squares in the Calibration pane turn from dark green (insufficient samples) to light green (sufficient samples). Once all the squares have turned light green, the Start Calculating button will become active.
After wanding throughout all areas of the volume, consult each camera's 2D view in the Camera Preview pane to evaluate individual camera coverage. Each camera view should be thoroughly covered with wand samples. If there are any large gaps, focus your wanding on those areas to increase coverage. When sufficient calibration samples have been collected by each camera, press Start Calculating in the Calibration pane, and Motive will start calculating the calibration for the capture volume. Generally, 1,000-4,000 samples are enough; samples above this threshold are unnecessary and can actually be detrimental to the calibration's accuracy.
For more information on camera status indicators, please visit the corresponding wiki page.
After sufficient marker samples have been collected, press Start Calculating to calibrate using the collected samples. The time needed for the calculation varies depending on the number of cameras in the setup as well as the number of collected samples. As Motive starts calculating, blue wanding paths will be displayed in the view panes, providing visual feedback on the calibration result for each camera. If you click Show List, you can also check the amount of error on each camera.
Tip: Calibration details for recorded Takes can also be reviewed. Select a Take in the Data pane, and the related calibration results will be displayed under its properties. This information is available only for Takes recorded in Motive 1.10 and above.
After the calculation, the calibration result will be reported in the Calibration pane. The result is directly related to the mean error, and the calibration result tiers are (in order from worst to best): Poor, Fair, Good, Great, Excellent, and Exceptional. If the results are acceptable, press Continue to apply the calibration. If not, press Cancel and repeat the wanding process. In general, if Motive reports anything below Excellent, you may want to adjust camera settings, refine your wanding technique, and try again.
After confirming that the calibration square is properly placed and detected by the cameras, press Set Ground Plane. You may need to manually select the markers on the calibration square if Motive fails to auto-detect the ground plane. If needed, the ground plane can be adjusted later.
The Vertical Offset is the offset distance between the center of the markers on the calibration square and the actual ground. For a custom calibration square, you will need to define this value so that the offset distance is taken into account and the global origin is set slightly below the markers. Accordingly, this value should correspond to the actual distance between the center of the markers and the lowest tip at the vertex of the calibration square. This setting can also be used when you want to place the ground plane at a specific elevation: a positive offset value will place the plane below the markers, and a negative value will place the plane above the markers.
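As a quick sanity check of the sign convention, the toy calculation below uses 11.5 mm as an example offset (a common calibration square value); a positive offset of exactly the marker-center height lands the origin on the physical floor.

```python
# Toy arithmetic for the Vertical Offset convention described above.
# Heights are in millimeters; 11.5 mm is just an example value.
marker_center_height = 11.5   # marker centers sit this far above the floor
vertical_offset = 11.5        # positive offset moves the plane below markers

ground_plane_height = marker_center_height - vertical_offset
print(ground_plane_height)    # -> 0.0, the origin lands on the physical floor
```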
If you wish to adjust the position and orientation of the global origin after a capture has been taken, you can apply capture volume translation and rotation from the Calibration pane. For changes to apply to recorded Takes, a new set of 3D data must be reconstructed from the recorded 2D data after the modification has been applied.
Calibration files can be used to preserve calibration results; the calibration information is exported and imported via the CAL file format. Calibration files remove the need to re-calibrate the system every time you open Motive. They are automatically saved into the default folders after each calibration, but in general it is suggested to export the calibration before each capture session. By default, Motive loads the last calibration file that was created; this can be changed via the application settings.
The continuous calibration feature continuously monitors and refines the camera calibration to maintain its best quality. When enabled, minor distortions to the camera system setup are adjusted automatically without wanding the volume again. In other words, you can calibrate a camera system once and no longer have to worry about external disturbances such as vibrations, thermal expansion of camera mounts, or small displacements of the cameras. For detailed information, read through the Continuous Calibration page.
Continuous calibration can be enabled or disabled from the Calibration pane once a system has been calibrated. The pane also shows when continuous calibration last updated.
3) Motive: Calibration pane. In the Edit mode, press Start Wanding. The wanding samples from the recorded 2D data will be loaded.
4) Motive: Calibration pane. Press Calculate, and wait until the calculation process is complete.
5) Motive: Calibration pane. Apply the result and export the calibration file (File menu → Export Camera Calibration).
9) Motive: Calibration pane: Ground Plane. Set the ground plane.
Open the Calibration pane.
From the Calibration pane, click Start Wanding. A pop-up dialogue will appear indicating that only the selected cameras are being calibrated.
If the markers on the calibration wand have been damaged, please contact Support to have them replaced.
If you are using active markers to track multiple Rigid Bodies, unique marker placements are not required. Through the active labeling protocol, active markers can be labeled individually, and multiple Rigid Bodies can be distinguished through uniquely assigned marker labels. Please read through the active marker tracking page for more information.
Having multiple non-unique Rigid Bodies may lead to mislabeling errors. However, Motive can still track non-unique Rigid Bodies fairly well as long as they are tracked continuously throughout the capture; Motive refers to the trajectory history to identify and associate the corresponding Rigid Bodies across frames. In order to track non-unique Rigid Bodies, make sure the Unique setting (Properties → General Settings) of the assets is set to False.
Even though it is possible to track non-unique Rigid Bodies, it is strongly recommended to make each asset unique. Tracking of multiple congruent Rigid Bodies can be lost during capture either by occlusion or by stepping outside of the capture volume. Also, when two non-unique Rigid Bodies are positioned close together and overlap in the scene, their marker labels may get swapped. If this happens, additional effort will be required to correct the labels in post-processing.
Select all associated Rigid Body markers in the 3D viewport.
Assets pane: While the markers are selected in Motive, click on the add button in the Assets pane.
Once the Rigid Body asset is created, the markers will be colored (labeled) and interconnected to each other. The newly created Rigid Body will be listed under the Assets pane.
If the Rigid Bodies, or Skeletons, are created in the Edit mode, the corresponding Take needs to be reconstructed and auto-labeled. Only then will the Rigid Body markers be labeled using the Rigid Body asset, and positions and orientations computed for each frame. If the 3D data has not been re-labeled after edits on the recorded data, the asset may not be tracked.
Rigid Body properties consist of various configurations of Rigid Body assets in Motive, and they determine how Rigid Bodies are tracked and displayed. For more information on each property, read through the Rigid Body properties page.
When a Rigid Body is first created, default Rigid Body properties are applied to the newly created asset. The default creation properties are configured under the Assets section of the application settings panel.
Properties of existing Rigid Body assets can be changed from the Properties pane.
The pivot point of a Rigid Body is used to define both its position and orientation. When a Rigid Body is created, its pivot point is placed at its geometric center by default, and its orientation axes are aligned with the global coordinate axes. To view the pivot point and the orientation in the 3D viewport, set Bone Orientation to true under the display settings of a selected Rigid Body in the Properties pane.
The position and orientation of a tracked Rigid Body can be monitored in real time from the Info pane. Simply select a Rigid Body in Motive, open the Info pane, and access the Rigid Bodies tool to view the respective real-time tracking data of the selected Rigid Body.
As mentioned previously, the orientation axes of a Rigid Body are, by default, aligned with the global axes when the Rigid Body is first created. After a Rigid Body is created, its orientation can be adjusted by editing the Rigid Body orientation in the Builder pane or by using the Gizmo tools as described in the next section.
There are situations where the desired pivot point location is not at the center of a Rigid Body. The location of the pivot point can be adjusted by assigning it to a marker or by translating it along the Rigid Body axes (x, y, z). For the most accurate pivot point location, attach a marker at the desired pivot location, set the pivot point to that marker, and apply translations for precise adjustments. If you are adjusting the pivot point after the capture, in the Edit mode, the Take will need to be re-solved to apply the changes.
Read through the corresponding page for detailed information.
To translate the pivot point, access the Rigid Body editing tools in the Builder pane while the Rigid Body is selected. In the Location section, you can input the amount of translation (in mm) that you wish to apply. Note that the translation will be applied along the x/y/z of the Rigid Body orientation axes. Resetting the translation will position the pivot point back at the geometric center of the Rigid Body according to its marker positions.
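For intuition, the math behind this local-axis translation looks roughly like the sketch below: the offset entered in the Location section is expressed in the Rigid Body's own axes, so it must be rotated by the body's orientation before being added to the world-space pivot. This is an illustrative snippet with made-up values, not Motive code.

```python
# Illustrative math only (not Motive code): translating a pivot point by an
# offset expressed in the Rigid Body's local axes. Values are made up.
import numpy as np
from scipy.spatial.transform import Rotation as R

pivot_world = np.array([1.0, 0.8, 0.0])            # current pivot, meters
orientation = R.from_euler("XYZ", [0, 45, 0], degrees=True)
local_offset = np.array([0.0, 0.0, 100.0]) / 1000  # +100 mm along local Z

new_pivot = pivot_world + orientation.apply(local_offset)
print(new_pivot)  # offset rotated into world space, then added
```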
If you wish to reset the pivot point, simply open the Rigid Body context menu in the 3D viewport and click Reset Pivot. The pivot point will be placed back at the center of the Rigid Body.
This feature is useful when tracking a spherical object (e.g. a ball). The Spherical Pivot Placement feature in the Builder pane assumes that all the Rigid Body markers are placed on the surface of a sphere, and the pivot point will be calculated and re-positioned accordingly. To do this, select a Rigid Body, access the Modify tab in the Builder pane, and click Apply under Spherical Pivot Placement.
Captured 6 DoF Rigid Body data can be exported into CSV, FBX, or BVH files. See:
You can also use one of the streaming plugins or use NatNet client applications to receive tracking data in real-time. See:
Assets can be exported into the Motive user profile (.MOTIVE) file if they need to be re-imported. The user profile is a text-readable file that can contain various configuration settings in Motive, including the asset definitions.
Open the Builder pane from the toolbar at the top.
In Motive, select an existing Rigid Body asset that you wish to refine from the Assets pane.
Click Refine in the Builder pane.
Live Mode Only. Continuous calibration only works in Live mode.
To enable continuous calibration, calibrate the camera system first and then enable the Continuous Calibration setting at the bottom of the Calibration pane. Once enabled, Motive continuously monitors the residual values of captured marker reconstructions, and when an updated calibration is better than the existing one, the calibration is updated automatically. Please note that at least four (by default) markers must be tracked in the volume for continuous calibration to work. You will also be able to monitor the sampling progress and when the calibration was last updated.
First, make sure the entire camera volume is fully calibrated and prepared for marker tracking.
Open the Calibration pane and select the second page at the bottom to access the anchor marker feature.
In the event that you need to manually adjust cameras in the 3D view, you can enable Editable in 3D View in the application settings. To access this setting, you'll need to select Show Advanced from the 3-dot more-options dropdown at the top; this will populate a Calibration section in the settings window.
This allows you to use the Gizmo tools to translate, rotate, and scale cameras to their desired locations.
For a full list of Log pane continuous calibration statuses, please see the corresponding page.
After you have completed all the steps above, Motive will be installed. If you want to use additional plugins, visit the plugins page.
During installation, some antivirus programs (e.g. BitDefender and McAfee) may block Motive from being downloaded. Our software, downloaded directly from the OptiTrack website, is safe to use and will not harm your computer. If an antivirus program allows Motive to download but you're still unable to view cameras in the Devices pane, or you are seeing frame/data drops, verify that your antivirus and firewall settings allow all traffic between your camera network and Motive. In some rare cases, you may need to completely uninstall the antivirus software if it continues to interfere with camera communication.
For more information on the different types of Motive licenses, check the software comparison table on our website or in the table below.
First, if you haven't already done so, make sure you have activated the software license. If the license has been activated successfully, there should be a license file (DAT) under the license folder directory C:\ProgramData\OptiTrack\License
If it's the first time using the camera system with the key, make sure the computer has access to the Internet so the system can go through the initial activation with the security key.
CS-100: Used to define the ground plane in small, precise motion capture volumes.
Long arm: Positive z
Short arm: Positive x
Vertical offset: 11.5 mm
Marker size: 9.5 mm (diameter)
Mean Ray Error
The Mean Ray Error reports the mean error of how closely the tracked rays from each camera converged onto a 3D point with a given calibration. This represents the precision of the calculated 3D points during wanding. Acceptable values vary depending on the size of the volume and the camera count.
Mean Wand Error
The Mean Wand Error reports the mean error of the detected wand length compared to the expected wand length throughout the wanding process.
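For intuition, a mean wand error is just the average deviation of the detected wand length from its known length across the wanding samples, along these lines (all values below are hypothetical):

```python
# Illustrative only: computing a mean wand error from hypothetical samples.
import numpy as np

expected_length = 500.0                                # mm, known wand length
detected = np.array([500.21, 499.84, 500.47, 499.62])  # mm, per-sample lengths

mean_wand_error = np.abs(detected - expected_length).mean()
print(f"{mean_wand_error:.3f} mm")  # average deviation from the true length
```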
Feature limits per license tier (values are listed left to right, in the same order as the tiers on the comparison table):

Live Rigid Bodies: 0 / 0 / Unlimited / Unlimited / Unlimited
Live Skeletons: 0 / 0 / 0 / Up to 3 / Unlimited
Edit Rigid Bodies: Unlimited / Unlimited / Unlimited / Unlimited / Unlimited
Edit Skeletons: Up to 3 / Unlimited / 0 / Up to 3 / Unlimited
Recommended specifications:
OS: Windows 10, 11 (64-bit)
CPU: Intel i7 or better, running at 3 GHz or greater
RAM: 16 GB of memory
GPU: GTX 1050 or better with the latest drivers and support for OpenGL 3.2+

Minimum specifications:
OS: Windows 10, 11 (64-bit)
CPU: Intel i7, 3 GHz
RAM: 4 GB of memory
GPU: Any GPU that supports OpenGL 3.2+
The Edit Tools in Motive enable users to post-process tracking errors in recorded capture data. There are multiple editing methods available, and you need to understand them clearly in order to properly fix errors in captured trajectories. Tracking errors are sometimes inevitable due to the nature of marker-based motion capture, so understanding the functionality of the editing tools is essential. Before getting into details, note that post-editing of motion capture data often takes considerable time and effort: all captured frames must be examined precisely, and corrections must be made for each error discovered. Furthermore, some of the editing tools apply mathematical modifications to marker trajectories, and these tools may introduce discrepancies if misused. For these reasons, we recommend optimizing the capture setup so that tracking errors are prevented in the first place.
Common tracking errors include marker occlusions and labeling errors. Labeling errors include unlabeled markers, mislabeled markers, and label swaps; fortunately, label errors can be corrected simply by reassigning the proper labels to the markers. Markers may also be blocked from camera views during capture. In this case, the markers will not be reconstructed into 3D space, introducing a gap in the trajectory; these are referred to as marker occlusions. Marker occlusions are critical because the trajectory data is not collected at all, and retaking the capture could be necessary if the missing marker is significant to the application. For occluded markers, the Edit Tools also provide interpolation pipelines to model the occluded trajectory using other captured data points. Read through this page to understand each of the data editing methods in detail.
Steps in Editing
General Steps
Skim through the overall frames in a Take to get an idea of which frames and markers need to be cleaned up.
Refer to the Labels pane and inspect the gap percentage for each marker.
Select a marker that is often occluded or misplaced.
Look through the frames in the Graph pane and inspect the gaps in the trajectory.
For each gap, look for an unlabeled marker at the expected location near the solved marker position, and re-assign the proper marker label if one exists.
Use the Trim Tails feature to trim both ends of the trajectory at each gap. It trims off the few frames adjacent to the gap where tracking errors might exist, preparing occluded trajectories for gap filling.
Find the gaps to be filled, and use the Fill Gaps feature to model the estimated trajectories for the occluded markers.
Re-solve the assets to update the solve from the edited marker data.
In some cases, you may wish to delete 3D data for certain markers in a Take file; for example, to delete corrupt 3D reconstructions or to trim out erroneous movements and improve the data quality. In the Edit mode, reconstructed 3D markers can be deleted for a selected range of frames. To delete a 3D marker, first select the 3D markers that you wish to delete and press the Delete key; they will be completely erased from the 3D data. If you wish to delete 3D markers for a specific frame range, open the Graph pane, select the frame range that you wish to delete the markers from, and press the Delete key. The 3D trajectories of the selected markers will be erased for the highlighted frame range.
Note: Deleted 3D data can be recovered by reconstructing and auto-labeling new 3D data from recorded 2D data.
The trimming feature can be used to crop a specific frame range from a Take. For each round of trimming, a copy of the Take will be automatically archived into a separate session folder.
Steps for trimming a Take
1) Determine a frame range that you wish to extract.
2) Set the working range (also called the view range) on the Graph View pane. All frames outside of this range will be trimmed out. You can set the working range through the following approaches:
Specify the starting frame and ending frame from the navigation bar on the Graph Pane.
3) After zooming into the desired frame range, click Edit > Trim Current Range to trim out the unnecessary frames.
4) A dialog box will pop up asking to confirm the data removal. If you wish to reset the frame numbers upon trimming the take, select the corresponding check box on the pop-up dialog.
The first step in post-processing is to check for labeling errors. Labels can be lost or mislabeled onto irrelevant markers either momentarily or entirely during capture, especially when the marker placement is not optimized or when there are extraneous reflections. As mentioned in other pages, marker labels are vital when tracking a set of markers, because each label affects how the overall set is represented. Examine the recorded capture and spot labeling errors from the perspective view, or by checking the trajectories in the Graph pane for suspicious markers. Use the Labels pane or the Tracks View mode in the Graph pane to monitor unlabeled markers in the Take.
When a marker is unlabeled momentarily, the color of the tracked marker switches between white (labeled) and orange (unlabeled) under the default color settings. Mislabeled markers may have large gaps and result in a crooked model and trajectory spikes. First, explore the captured frames and find where the label has been misplaced. As long as the target markers are visible, this error can easily be fixed by reassigning the correct labels. Note that this method is preferred over the editing tools because it conserves the actual data and avoids approximation.
Read more about labeling markers from the Labeling page.
The Edit Tools provide functionality to modify and clean up 3D trajectory data after a capture has been taken. Multiple post-processing methods are featured in the Edit Tools for different purposes: Trim Tails, Fill Gaps, Smooth, and Swap Fix. The Trim Tails method removes data points in the few frames before and after a gap. The Fill Gaps method calculates missing marker trajectories using interpolation. The Smooth method filters out unwanted noise in the trajectory signal. Finally, the Swap Fix method switches the marker labels of two selected markers. Remember that modifying data using the Edit Tools changes the raw trajectories, so overuse is not recommended. Read through each method and familiarize yourself with the editing tools. Note that you can undo and redo all changes made using the Edit Tools.
Frame Range: If you have a certain frame range selected from the timeline, data edits will be applied to the selected range only.
The Trim Tails method trims, or removes, a few data points before and after a gap. Whenever there is a gap in a marker trajectory, slight tracking distortions may be present at each end; for this reason, it is usually beneficial to trim off a small segment (~3 frames) of data. If these distortions are ignored, they may also interfere with other editing tools that rely on existing data points. Before trimming trajectory tails, check the gaps to see whether the tracking data is distorted; after all, it is better to preserve the raw tracking data as long as it is valid. Set the appropriate trim settings, and apply the trim to selected frames or to all frames. Each gap must satisfy the gap size threshold value to be considered for trimming, and each trajectory segment must satisfy the minimum segment size, otherwise it will be considered part of a gap. Finally, the Trim Size value determines how many leading and trailing trajectory frames are removed around a gap.
Smart Trim
The Smart Trim feature automatically sets the trimming size based on trajectory spikes near an existing gap. Deleting numerous data points before or after a gap is often unnecessary, but in some cases it is useful to delete more data points on one end than on the other. This feature determines whether each end of the gap is likely to contain errors and deletes an appropriate number of frames accordingly. Smart Trim will never trim more frames than the defined Leading and Trailing values.
Gap filling is the primary method in the data editing pipeline; this feature remodels trajectory gaps with interpolated marker positions, accommodating markers that were occluded during capture. The function runs mathematical models to interpolate occluded marker positions from either the existing trajectories or other markers in the asset. Note that interpolating a large gap is not recommended, because approximating too many data points may lead to data inaccuracy.
New to Motive 3.0; for Skeletons and Rigid Bodies only. Model asset markers can be used to fill individual frames where a marker has been occluded. Model asset markers must first be enabled in the Properties pane while the desired asset is selected, and then enabled for selection in the viewport. When you encounter frames where the marker is lost from camera view, select the associated model asset marker in the 3D view, right-click for the context menu, and select Set Key.
First, set the Max. Gap Size value to define the maximum frame length for an occlusion to be considered a gap; longer gaps will not be affected by the filling mechanism. Set a reasonable maximum gap size after looking through the occluded trajectories. To quickly navigate the trajectory graphs in the Graph pane for missing data, use the Find Gap features (Find Previous and Find Next) to automatically select a gap frame region for interpolation. Then, apply the Fill Gaps feature while the gap region is selected. Various interpolation options are available in the settings, including Constant, Linear, Cubic, Pattern-based, and Model-based.
There are five interpolation options offered in the Edit Tools: constant, linear, cubic, pattern-based, and model-based. The first three (constant, linear, and cubic) look at a single marker trajectory and estimate the marker position using the data points before and after the gap; in other words, they model the gap by applying polynomial interpolations of different degrees. The other two options (pattern-based and model-based) reference visible markers and models to estimate the occluded marker position.
Constant
Applies zero-degree approximation, assumes that the marker position is stationary and remains the same until the next corresponding label is found.
Linear
Applies first-degree approximation, assuming that the motion is linear, to fill the missing data. Only use this when you are sure the marker is moving linearly.
Cubic
Applies third-degree polynomial interpolation, cubic spline, to fill the missing data in the trajectory.
Pattern based
This option refers to the trajectories of selected reference markers and assumes the target marker moves in a similar pattern. The Fill Target marker is specified from the drop-down menu under the Fill Gaps tool. When multiple markers are selected, a Rigid Body relationship is established among them, and that relationship is used to fill the trajectory gaps of the selected Fill Target marker as if they were all attached to the same Rigid Body. The general workflow for pattern-based interpolation is as follows (see also the interpolation sketch after this list):
Select both reference markers and the target marker to fill.
Examine the trajectory of the target marker from the Graph Pane: Size, range, and a number of gaps.
Set an appropriate Max. Gap Size limit.
Select the Pattern Based interpolation option.
Specify the Fill Target marker in the drop-down menu.
When interpolating for only a specific section of the capture, select the range of frames from Graph pane.
Click Fill Selected, Fill All, or Fill Everything.
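To make the cubic option concrete, the hedged sketch below reproduces the idea outside Motive: fit a cubic spline to the valid samples around a gap and evaluate it at the missing frames. It assumes NumPy/SciPy and uses made-up data; it is not Motive's implementation.

```python
# Standalone sketch of cubic gap filling (analogous to the Cubic option):
# fit a spline to valid frames around a gap, then evaluate the gap frames.
import numpy as np
from scipy.interpolate import CubicSpline

frames = np.array([0, 1, 2, 3, 8, 9, 10, 11])  # valid frames; gap at 4-7
x = np.array([0.00, 0.10, 0.21, 0.33, 0.95, 1.12, 1.31, 1.52])  # one coordinate

spline = CubicSpline(frames, x)
gap_frames = np.arange(4, 8)
print(spline(gap_frames))  # interpolated positions for the occluded frames
```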
The smoothing tool applies a noise filter (4th-order low-pass Butterworth) to trajectory data, making the marker trajectory smoother. This is a bi-directional filter that does not introduce phase shifts. Using this tool, vibrating or fluttering movements are filtered out. First, set the cutoff frequency for the filter to define how strongly your data will be smoothed. When the cutoff frequency is set high, only high-frequency signals are filtered; when it is low, trajectory signals in a lower frequency range are filtered as well. In other words, a low cutoff frequency will smooth most of the transitioning trajectories, whereas a high cutoff frequency will smooth only the fluttering trajectories. High-frequency content is present during sharp transitions, but it can also be introduced by signal noise. Commonly used values for the Filter Cutoff Frequency are between 7 Hz and 12 Hz, but you may want to raise the value for fast and sharp motions to avoid softening motion transitions that need to stay sharp.
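The equivalent operation in a script looks like the sketch below: design a 4th-order low-pass Butterworth filter and apply it bidirectionally with filtfilt so no phase shift is introduced. The frame rate and cutoff are example values, not Motive defaults.

```python
# Sketch of the smoothing above: 4th-order low-pass Butterworth applied
# bidirectionally (zero phase). Frame rate and cutoff are example values.
import numpy as np
from scipy.signal import butter, filtfilt

capture_rate = 120.0  # camera frame rate in Hz (assumed)
cutoff_hz = 10.0      # within the commonly used 7-12 Hz range

b, a = butter(4, cutoff_hz / (capture_rate / 2.0))   # normalized cutoff
trajectory = np.cumsum(np.random.randn(600)) * 0.01  # stand-in marker data
smoothed = filtfilt(b, a, trajectory)                # forward-backward filter
```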
This tool is used to quickly delete marker trajectories that exist for only a few frames. Markers that appear only momentarily are likely the result of noise in the data. To clean up these short-lived trajectories, set the minimum frame percentage under the settings; when you click Delete, individual marker trajectories shorter than the defined percentage will be deleted.
In some cases, marker labels may get swapped during capture. Swapped labels can result in erratic orientation changes or crooked Skeletons, but they can be corrected by re-labeling the markers. The Swap Fix feature in the Edit Tools can be used to correct obvious swaps that persist through the capture. Select the two markers whose labels are swapped, and select the frame range that you wish to edit; the Find Previous and Find Next buttons navigate to the frames where their positions were exchanged. If a frame range is not specified, the change will be applied from the current frame forward. Finally, switch the marker labels by clicking the Apply Swap button. As long as both labels are present in the frame and the only correction needed is to exchange the labels, the Swap Fix tool can be used to make the correction.
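Outside Motive, the same correction amounts to exchanging the two markers' position columns over the affected frame range. The sketch below does this on an exported CSV with pandas; the filename, column names, frame range, and header row count are all hypothetical.

```python
# Hypothetical sketch (pandas): undo a label swap in an exported CSV by
# exchanging two markers' position columns over the affected frame range.
import pandas as pd

df = pd.read_csv("take.csv", skiprows=6)          # header row count may vary
rng = slice(1200, 1350)                           # frames with swapped labels
cols_a = ["Marker1.X", "Marker1.Y", "Marker1.Z"]  # adjust to your column names
cols_b = ["Marker2.X", "Marker2.Y", "Marker2.Z"]

df.loc[rng, cols_a], df.loc[rng, cols_b] = (
    df.loc[rng, cols_b].values, df.loc[rng, cols_a].values)
```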
Solved Data: After editing marker data in a recorded Take, corresponding Solved Data must be updated.
Tracking data can be exported into the C3D file format. C3D (Coordinate 3D) is a binary file format that is widely used in biomechanics and motion study applications. Recorded data from external devices, such as force plates and NI-DAQ devices, will be included in exported C3D files. Note that common biomechanics applications use a Z-up right-handed coordinate system, whereas Motive uses a Y-up right-handed coordinate system; coordinate systems are described in a later section. Find out more about C3D files at https://www.c3d.org.
General Export Options
Frame Rate
Number of samples included per second of exported data.
Start Frame
Sets the first frame of the exported frame range.
End Frame
Sets the last frame of the exported frame range.
Scale
Apply scaling to the exported tracking data.
Units
Sets the length units to use for exported data.
Axis Convention
Sets the axis convention of the exported data. This can be set to a custom convention, or to preset conventions for exporting to MotionBuilder or Visual3D/Motion Monitor.
X Axis Y Axis Z Axis
Allows customization of the axis convention in the exported file by determining which positional data is included in the corresponding data set.
C3D Specific Export Options
Use Zero Based Frame Index
The C3D specification defines the first frame as index 1, but some applications import C3D files with the first frame starting at index 0. Setting this option to true will add a start frame parameter with value zero in the data header.
Export Unlabeled Markers
Includes unlabeled marker data in the exported C3D file. When set to False, the file will contain data only for labeled markers.
Export Finger Tip Markers
Includes virtual reconstructions at the fingertips. Available only with Skeletons that support finger tracking (e.g. Baseline + 11 Additional Markers + Fingers (54)).
Use Timecode
Includes timecode.
Rename Unlabeled As _000X
Unlabeled markers will be given incrementing labels of the form _000#.
Marker Name Syntax
Chooses whether the marker naming syntax uses ":" or "_" as the name separator. The separator joins the asset name and the corresponding marker name in the exported data (e.g. AssetName:MarkerLabel, AssetName_MarkerLabel, or MarkerLabel).
Common Conventions
Since Motive uses a different coordinate system than common biomechanics applications, it is necessary to change the coordinate axes to a compatible convention in the C3D exporter settings. For biomechanics applications using the Z-up right-handed convention (e.g. Visual3D), the following changes must be made under the custom axis settings:
X axis in Motive should be configured to positive X.
Y axis in Motive should be configured to negative Z.
Z axis in Motive should be configured to positive Y.
This will convert the coordinate axes of the exported data so that the x-axis represents the anteroposterior axis (front/back), the y-axis represents the mediolateral axis (left/right), and the z-axis represents the longitudinal axis (up/down).
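The conversion above amounts to a simple coordinate mapping, shown here as a hedged sketch (the point values are arbitrary): each Y-up point becomes a Z-up point with the up component moved to z.

```python
# Sketch of the Y-up (Motive) to Z-up (biomechanics) mapping described above:
# new X = X, new Y = -Z, new Z = Y. Point values are arbitrary.
import numpy as np

def motive_to_zup(point):
    x, y, z = point
    return np.array([x, -z, y])

print(motive_to_zup((1.0, 2.0, 3.0)))  # -> [ 1. -3.  2.]
```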
MotionBuilder Compatible Axis Convention
This is a preset convention for exporting C3D files for use in Autodesk MotionBuilder. Even though Motive and MotionBuilder both use a Y-up right-handed coordinate system, MotionBuilder assumes biomechanics standards when importing C3D files. Accordingly, when exporting C3D files for MotionBuilder use, set the Axis setting to MotionBuilder Compatible, and the axes will be exported using the following convention:
Motive: X axis → Set to negative X → Mobu: X axis
Motive: Y axis → Set to positive Z → Mobu: Y axis
Motive: Z axis → Set to positive Y → Mobu: Z axis
There is a known behavior where C3D data imported with timecode does not show up accurately in MotionBuilder. This happens because MotionBuilder sets the subframe counts in the timecode using its internal playback rate instead of the rate of the timecode. When this happens, set the playback rate in MotionBuilder to the same rate as the timecode generator (e.g. 30 Hz) to get correct timecode. This occurs only with C3D import in MotionBuilder; FBX import will work fine without changing the playback rate.
Captured tracking data can be exported in Comma Separated Values (CSV) format. This file format uses comma delimiters to separate multiple values in each row, and it can be imported by spreadsheet software or a programming script. Depending on which data export options are enabled, exported CSV files can contain marker data, Rigid Body data, and/or Skeleton data. CSV export options are listed in the following charts:
General Export Options
Frame Rate
Number of samples included per second of exported data.
Start Frame
Sets the first frame of the exported frame range.
End Frame
Sets the last frame of the exported frame range.
Scale
Apply scaling to the exported tracking data.
Units
Sets the length units to use for exported data.
Axis Convention
Sets the axis convention of the exported data. This can be set to a custom convention, or to preset conventions for exporting to MotionBuilder or Visual3D/Motion Monitor.
X Axis Y Axis Z Axis
Allows customization of the axis convention in the exported file by determining which positional data is included in the corresponding data set.
CSV Export Options
Markers
Enabling this option includes X/Y/Z reconstructed 3D positions for each marker in exported CSV files.
Unlabeled Markers
Enabling this option includes tracking data for all of the unlabeled markers in the exported CSV file along with the labeled markers. If you want only labeled marker data, turn this export setting off.
Rigid Bodies
When this option is set to true, the exported CSV file will contain 6 Degrees of Freedom (6 DoF) data for each Rigid Body in the Take. 6 DoF data contains orientations (pitch, roll, and yaw, in the chosen rotation type) as well as 3D positions (x, y, z) of the Rigid Body center.
Rigid Body Markers
Enabling this option includes 3D position data for each Marker Constraint location (not the actual marker location) of Rigid Body assets. Compared to the raw marker positions included in the Markers columns, the Rigid Body Markers show the solved positions of the markers as affected by the Rigid Body tracking, unaffected by occlusions.
Bones
When this option is set to true, exported CSV files will include 6 DoF data for each bone segment of the Skeletons in the exported Take. 6 DoF data contains orientations (pitch, roll, and yaw) in the chosen rotation type, as well as 3D positions (x, y, z) of the center of each bone.
Bone Markers
Enabling this option includes 3D position data for each Marker Constraint location (not the actual marker location) of the bone segments in Skeleton assets. Compared to the real marker positions included in the Markers columns, the Bone Markers show the solved positions of the markers as affected by the Skeleton tracking, unaffected by occlusions.
Header information
Includes detailed information about the capture data as a header in exported CSV files. The types of information included in the header section are listed in the following section.
Rotation Type
Rotation type determines whether Quaternion or Euler angles are used as the orientation convention in exported CSV files. For Euler rotations, a right-handed coordinate system is used, and all orders (XYZ, XZY, YXZ, YZX, ZXY, ZYX) of elemental rotation are available. For example, the XYZ order indicates pitch as the rotation about the X axis, yaw about the Y axis, and roll about the Z axis.
Device Data
When set to True, separate CSV files will be exported for recorded device data. This includes force plate data and analog data from NI-DAQ devices; a CSV file will be exported for each device included in the Take.
Use World Coordinates
This option determines whether exported data is expressed in the world (global) coordinate system or in local coordinate systems.
Rigid Body markers and Skeleton bone markers are referred to as Marker Constraints. They appear as transparent spheres within a Rigid Body or a Skeleton, and each sphere reflects the position where the asset expects to find a 3D marker. When an asset definition is created, it is assumed that the markers are fixed at the same locations and do not move over the course of the capture.
In the CSV file, Rigid Body markers have both a physical marker column and a Marker Constraint column. They have nearly the same ID but are distinguished by their first 8 characters, which make each uniquely identifiable.
When a marker is occluded, the Marker Constraint column will show the last known position where Motive expects the marker to be, while the actual physical marker column will contain a blank cell or null value, since Motive cannot determine its actual location during the occlusion.
When the header is disabled, this information will be excluded from the CSV file. Instead, the file will have frame IDs in the first column, time data in the second column, and the corresponding mocap data in the remaining columns.
CSV Headers
1st row
General information about the Take and the export settings. Included information: format version of the CSV export, name of the TAK file, captured frame rate, export frame rate, capture start time, total number of frames, rotation type, length units, and coordinate space type.
2nd row
Empty
3rd row
Indicates the data type of each column (e.g. Rigid Body, Rigid Body Marker, Bone, Bone Marker, or Marker).
4th row
Includes marker or asset labels for each corresponding data set.
5th row
Displays marker ID.
6th and 7th row
Shows which data is included in the column: rotation or position on X/Y/Z.
TIP: Occlusion in the marker data
When a marker is occluded, the CSV file will contain blank cells, which can interfere with scripts that process the CSV data. It is recommended to optimize the system setup to reduce occlusions. To omit unnecessary frame ranges with frequent marker occlusions, export only the frame range with the most complete tracking results. Another solution is to use Fill Gaps to interpolate the missing trajectories in post-processing.
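If you process the file with a script, occluded frames show up as missing values once parsed. A hedged approach is to interpolate only short gaps, along these lines (the filename and header row count are assumptions; adjust them to your export settings):

```python
# Sketch: handling occlusion blanks in an exported CSV with pandas.
# The number of header rows to skip depends on your export settings.
import pandas as pd

df = pd.read_csv("take.csv", skiprows=6)  # blank cells are parsed as NaN
df = df.interpolate(limit=10)             # fill gaps up to 10 frames only
print(df.isna().sum().sum(), "cells still missing (longer gaps)")
```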
For Takes containing force plates (AMTI or Bertec) or data acquisition (NI-DAQ) devices, additional CSV files will be exported for each connected device. For example, if you have two force plates and a NI-DAQ device in the setup, a total of 4 CSV files will be created when you export the tracking data from Motive. Each exported CSV file will contain basic properties and settings in its header (if Header Information is enabled), including device information and sample counts. The ratio of the mocap frame rate to the device sampling rate is also included, since force plate and analog data are sampled at higher rates.
Since device data is usually sampled at a higher rate than the camera system, camera samples are aligned with the center of the corresponding device data samples. For example, if the device data has 9 sub-frames for each camera frame, the camera tracking data will be aligned with every 5th frame of the device data.
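The alignment rule reduces to a one-line calculation; assuming N sub-frames per camera frame, the camera sample sits at the center sub-frame:

```python
# The camera sample lines up with the center device sub-frame.
def camera_subframe(subframes_per_frame: int) -> int:
    return subframes_per_frame // 2 + 1  # 1-based; 9 sub-frames -> the 5th

print(camera_subframe(9))  # -> 5, matching the example above
```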
Force Plate Data: Each force plate CSV file contains basic properties such as platform dimensions and the mechanical-to-electrical center offset. The mocap frame number, force plate sample number, forces (Fx/Fy/Fz), moments (Mx/My/Mz), and the location of the center of pressure (Cx/Cy/Cz) are listed below the header.
Analog Data: Each of the analog data CSV files contains analog voltages from each configured channel.
Hotkeys can be viewed and customized from the application settings panel. The chart below lists only the commonly used hotkeys; there are other assigned and unassigned hotkeys that are not included. For a complete list of hotkey assignments, please check the application settings in Motive.
File
Open File (TTP, CAL, TAK, TRA, SKL): Ctrl + O
Save Current Take: Ctrl + S
Save Current Take As: Ctrl + Shift + S
Export Tracking Data from current (or selected) TAKs: Ctrl + Shift + Alt + S

Basic
Toggle Between Live/Edit Mode: Shift + ~
Record Start / Playback Start: Space Bar
Select All: Ctrl + A
Undo: Ctrl + Z
Redo: Ctrl + Y
Cut: Ctrl + X
Paste: Ctrl + V

Layout
Calibrate Layout: Ctrl + 1
Create Layout: Ctrl + 2
Capture Layout: Ctrl + 3
Edit Layout: Ctrl + 4
Custom Layout [1...]: Ctrl + [5...9], Shift + [1...9]

Perspective View Pane (3D)
Switch selected viewport to 3D perspective view: 1
Switch selected viewport to 2D camera view: 2
Show view angle from a selected camera or a Rigid Body: 3
Open single viewport: Shift + 1
Open two viewports, split horizontally: Shift + 2
Open two viewports, split vertically: Shift + 3
Open four viewports: Shift + 4

Perspective View Pane (3D)
Follow Selected: G
Zoom to Fit Selection: F
Zoom to Fit All: Shift + F
Reset Tracking: Ctrl + R
"
Shift + "
Jog Timeline: Alt + Left Click
Create Rigid Body From Selected: Ctrl + T
Refresh Skeleton Asset: Ctrl + R (with a Skeleton asset selected)
Enable/Disable Asset Editing: T
Toggle Labeling Mode: D
Select Mode: Q
Translation Mode: W
Rotation Mode: E
Scale Mode: R

Camera Preview (2D)
Video Modes: U (Grayscale Mode), I (MJPEG Mode), O (Object Mode)

Data Management Pane
Remove or Delete Session Folders: Delete
Remove Selected Take: Delete
Paste shots as empty Take from clipboard: Ctrl + V

Timeline / Graph View
Toggle Live/Edit Mode: ~
Again+: +
Record (Live mode): Space
Start/stop playback (Edit mode): Space
Rewind (jump to the first frame): Ctrl + Shift + Left Arrow
Page Time Backward (ten frames): Down Arrow
Step Time Backward (one frame): Left Arrow
Step Time Forward (one frame): Right Arrow
Page Time Forward (ten frames): Up Arrow
Fast Forward (jump to the last frame): Ctrl + Shift + Right Arrow
To next gapped frames: Z
To previous gapped frames: Shift + Z
Delete Selected Keys in 3D data (Graph View): Delete (when a frame range is selected)
Show All: Shift + F
Frame To Selected: F
Zoom to Fit All: Shift + F

Editing / Labeling Workflow
Apply smoothing to selected trajectory: X
Apply cubic fit to the gapped trajectory: C
Toggle Labeling Mode: D
To next gapped frame: Z
To previous gapped frame: Shift + Z
Enable/Disable Asset Editing: T
Select Mode: Q
Translation Mode: W
Rotation Mode: E
Scale Mode: R
Delete selected keys: Delete
It is strongly recommended to use separate audio capture software with timecode to capture and synchronize audio data. Audio capture in Motive is for reference only and is not intended to align perfectly with video or motion capture data.
Scrubbing through a Take is not supported for audio recorded within Motive. If you would like the audio to stay closely aligned with the video and motion capture data, you must play the Take from the beginning.
Recorded “Take” files with audio data will play back sound and may be exported into WAV audio files. This page details audio capture recommendations and instructions for recording and playing back audio in Motive.
Confirmed Devices
For users who need this feature, it is recommended to use one of the devices below, which have been confirmed to work:
AT2020 USB microphone
mixPre-3
In Motive, open the Audio tab of the Settings window, then enable the “Capture” property.
Select the audio input device that you would like to use.
Make noise to confirm the microphone is working with the level visual.
Make sure the “Device Format” of the recording device matches the “Device Format” that will be used for playback (speakers and headsets).
Start capturing data.
In Motive, open a Take that includes audio data.
Open the Audio tab of the Settings window, then enable the “Playback” property.
Select the audio output device that you will be using.
Make sure the configurations in Device Format closely matches the Take Format.
Play the Take.
In order to play back audio recordings in Motive, the audio format of the recorded data MUST closely match the audio format used by the output device. Specifically, the number of channels and the frequency (Hz) of the audio must match; otherwise, the recorded sound will not be played back.
The recorded audio format is determined when a Take is first recorded. The recorded data format and the playback format may not always agree by default; in this case, the Windows audio settings will need to be adjusted to match the Take.
Audio capture within Motive does not natively synchronize to video or motion capture data and is intended for reference audio only. If you require synchronization, please use an external device and software with timecode. See the suggestions for external audio capture below.
A device's audio format can be configured under the Sound settings in the Control Panel. Select the recording device, click Properties, and change the default format under the Advanced tab.
Recorded audio can be exported into WAV format. To export, right-click on a Take in the Data pane and select the Export Audio option in the context menu.
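Since playback requires the channel count and sample rate to match the output device, it can help to inspect an exported WAV before configuring Windows playback. Below is a minimal check using Python's standard wave module; the filename is hypothetical.

```python
# Minimal check of an exported WAV's format using Python's standard library.
import wave

with wave.open("exported_take_audio.wav", "rb") as w:  # hypothetical filename
    print(w.getnchannels(), "channel(s) at", w.getframerate(), "Hz")
```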
There are a variety of programs and hardware devices that specialize in audio capture. A non-exhaustive list of examples:
Tentacle Sync TRACK E
Adobe Premiere
Avid Media Composer
Etc...
In order to capture audio using a different program, connect both the motion capture system (through the eSync) and the audio capture device to a timecode source (and possibly genlock). You can then use the timecode information to synchronize the two data sources in your end product.
For more information on synchronizing external devices, read through the Synchronization page.
The following devices have been tested internally and should work for most reference-audio use cases:
AT2020 USB
MixPre-3 II Digital USB Preamp
Motive can export tracking data in the BioVision Hierarchy (BVH) file format. Exported BVH files do not include individual marker data; instead, a selected Skeleton is exported using hierarchical segment relationships. In a BVH file, the 3D location of the primary Skeleton segment (Hips) is exported, and subsequent segments are recorded using joint angles and segment parameters. Only one Skeleton is exported per BVH file, and it contains the fundamental Skeleton definition required for characterizing the Skeleton in other pipelines.
Notes on relative joint angles generated in Motive: Joint angles generated and exported from Motive are intended for basic visualization purposes only and should not be used for any type of biomechanical or clinical analysis.
General Export Options
Frame Rate
Number of samples included per second of exported data.
Start Frame
End Frame
Scale
Apply scaling to the exported tracking data.
Units
Sets the length units to use for exported data.
Axis Convention
Sets the axis convention on exported data. This can be set to a custom convention, or preset conventions for exporting to Motion Builder or Visual3D/Motion Monitor.
X Axis Y Axis Z Axis
Allows customization of the axis convention in the exported file by selecting which positional data is included in each axis.
BVH Specific Export Options
Single Joint Torso
When this is set to true, there will be only one skeleton segment for the torso. When set to false, there will be extra joints on the torso, above the hip segment.
Hands Downward
Sets the exported skeleton base pose to use hands facing downward.
MotionBuilder Names
Sets the name of each skeletal segment according to the bone naming convention used in MotionBuilder.
Skeleton Names
Set this to the name of the skeleton to be exported.
This page provides a basic description of marker labels and instructions on the labeling workflow in Motive.
Marker Label
Marker labels are software name tags that are assigned to trajectories of reconstructed 3D markers so that they can be referenced for tracking individual markers, Rigid Bodies, or Skeletons. Motive identifies marker trajectories using the assigned labels. Labeled trajectories can be exported individually, or combined together to compute positions and orientations of the tracked objects. In most applications, all of the target 3D markers will need to be labeled in Motive. There are two methods for labeling markers in Motive: auto-labeling and manual labeling, and both methods are covered on this page.
Solved Data: After editing marker data in a recorded Take, corresponding Solved Data must be updated.
Monitoring Labels
Labeled or unlabeled trajectories can be identified and resolved from the following places in Motive:
3D Perspective Viewport: From the 3D viewport, check the Marker Labels in the visual aids option to view marker labels for selected markers.
Labels pane: The Labels pane lists all of the marker labels and the corresponding percentage gap for each label. The color of the label also indicates whether the label is present or missing at the current frame.
Graph View pane: For frames where the selected label is not assigned to any markers, the timeline scrubber gets highlighted in red. Also, the tracks view of this pane provides a list of labels and their continuity in a captured Take.
There are two approaches to labeling markers in Motive:
Auto-label pipeline: Automatically label sets of Rigid Body markers and Skeleton markers using calibrated asset definitions.
Manual Label: Manually label individual markers using the Labels pane.
For tracking Rigid Bodies and Skeletons, Motive can use the asset definitions to automatically label associated markers, both in real-time and in post-processing. The auto-labeler uses the asset definitions that are enabled, or checked, in the Assets pane to search for sets of markers that match each definition and assigns the pre-defined labels throughout the capture.
There are times, however, when it is necessary to manually label a section or all of a trajectory, either because the markers of a Rigid Body or a Skeleton were misidentified (or unidentified) during capture, or because individual markers need to be labeled without using any tracking assets. In these cases, the Labels pane in Motive is used to manually label individual trajectories. The manual labeling workflow is supported only in post-processing, when a Take file (TAK) has been loaded with 3D data as its playback type. For captures with 2D data only, the Take must be reconstructed first in order to assign, or edit, the marker labels in its 3D data. This manual labeling process, along with 3D data editing, is typically referred to as post-processing of mocap data.
Rigid body and Skeleton asset definitions contain information of marker placements on corresponding assets. This is recorded when the assets are first created, and the auto-labeler in Motive uses them to label a set of reconstructed 3D trajectories that resemble marker arrangements of active assets. Once all of the markers on active assets are successfully labeled, corresponding Rigid Bodies and Skeletons get tracked in the 3D viewport.
The auto-labeler runs in real-time during Live mode and the marker labels get saved onto the recorded TAKs. Running the auto-labeler again in post-processing will basically attempt to label the Rigid Body and Skeleton markers again from the 3D data.
From Data pane
Select Takes from the Data pane
Right-click to bring up the context menu
Click Reconstruct and Auto-label to process the selected Takes. This pipeline will create a new set of 3D data and auto-label the markers in it.
This will label all of the markers that match the corresponding asset definitions.
Note: Be careful when reconstructing a Take again either by Reconstruct or Reconstruct and Auto-label, because it will overwrite the 3D data and any post-processing edits on trajectories and marker labels will be discarded. Also, for Takes involving Skeleton assets, the recorded Skeleton marker labels, which were intact during the live capture, may be discarded, and reconstructed markers may not be auto-labeled again if the Skeletons are never in well-trackable poses throughout the captured Take. This is another reason why you want to start a capture with a calibration pose (e.g. T-pose).
A Marker Set is a list of labels, or marker names, that can be manually assigned to unlabeled markers. One can be created when there is a need to label individual markers in the scene that are not associated with either a Rigid Body or a Skeleton asset.
Labels in the Marker Set, Rigid Body, and Skeleton assets are managed using the Constraints pane. Please refer to the Constraints pane to see how to add and/or modify marker labels. Once the labels are added, the Labels pane can be used to assign them onto markers.
Read more at Constraints pane page.
The Labels pane is used to assign, remove, and edit marker labels in the 3D data. The Tracks View under the Graph View pane can be used in conjunction with the Labels pane to monitor which markers and gaps are associated. The Labels pane is also used to examine the number of occluded gaps in each label, and it can be used along with the Editing Tools for complete post-processing.
For a given frame, all labels are color-coded. For each frame of 3D data, assigned marker labels are shown in white, labels without reconstructions are shown in red, and unlabeled reconstructions are shown in orange; similar to how they are presented in the 3D View.
See the Labels pane page for detailed explanation on each option.
The QuickLabel mode allows you to tag labels with single clicks in the view pane, and it is a handy way to reassign or modify marker labels throughout the capture. When the QuickLabel mode is toggled, the mouse cursor switches to a finger icon with the selected label name attached next to it. Also, when the display label option is enabled in the perspective view, all of the assigned marker labels will be displayed next to each marker in the 3D viewport, as shown in the image below. Select the Marker Set you wish to label, and tag the appropriate labels to each marker throughout the capture.
When assigning labels using the QuickLabel mode, the labeling scope is configured from the labeling range settings. You can restrict the labeling operation to apply from the current frame backward, from the current frame forward, or in both directions. You may also restrict labeling operations to apply the selected label to all frames in the Take, to a selected frame range, or to a trajectory 'fragment' enclosed by gaps or spikes. The fragment/spike setting is used by default, and it generally works best for identifying mislabeled frame ranges and assigning marker labels. See the Labels pane page for details on each feature.
Under the drop-down menu in the Labels pane, select an asset you wish to label.
All of the involved markers will be displayed under the columns.
From the label list, select unlabeled or mislabeled markers.
Inspect the behavior of the selected trajectory and decide whether you want to apply the selected label to frames forward or frames backward or both. This option can be selected from labeling range settings on the Labels pane.
Hiding Marker Labels
If the marker labels are set to visible in the 3D viewport, Motive will show all of the marker labels when entering the QuickLabel mode. To hide all of the marker labels from showing up in the viewport, you can click on the visual aids option in the perspective view, and uncheck marker labels.
The following section provides the general labeling steps in Motive. Note that the labeling workflow is flexible and alternative approaches to the steps listed in this section could also be used. Utilize the auto-labeling pipelines in combination with the Labels pane to best reconstruct and label the 3D data of your capture.
Labeling Tips
Use the Graph View pane to monitor occlusion gaps and labeling errors as you post-process capture Takes
When using the Labels pane, choose the most appropriate labeling range settings (all, selected, spike, or fragment) to efficiently label selected trajectories.
Motive Hotkeys can increase the speed of the workflow. Use Z and Shift+Z hotkeys to quickly find gaps in the selected trajectory.
When working with Skeleton assets, label the hip segment first. The hip segment is the main parent segment, at the top of the segment hierarchy, to which all other child segments are associated. Manually assigning the hip markers sometimes helps the auto-labeler label the entire asset.
For Skeleton assets, the Show Tracking Errors property can be utilized to display tracking errors on Skeleton segments.
Step 1. In the Data pane, reconstruct and auto-label the Take with all of the desired assets enabled.
Step 2. In the Graph View pane, examine the trajectories and navigate to the frame where labeling errors are frequent.
Step 3. Open the Labels pane.
Step 4. Select an asset that you wish to label.
Step 5. From the label columns, click on a marker label that you wish to re-assign.
Step 6. Inspect behavior of a selected trajectory and its labeling errors and set the appropriate labeling settings (allowable gap size, maximum spike and applied frame ranges).
Step 7. Switch to the QuickLabel mode (Hotkey: D).
Step 8. On the Perspective View, assign the labels onto the corresponding marker reconstructions by clicking on them.
Step 9. When all markers have been labeled, switch back to the Select Mode.
Step 1. Start with 2D data of a captured Take with model assets (Skeletons and Rigid Bodies).
Step 2. Reconstruct and Auto-Label, or just Reconstruct, the Take with all of the desired assets enabled under the Assets pane. If you use Reconstruct only, you can skip steps 3 and 5 for the first iteration.
Step 3. Examine the reconstructed 3D data, and inspect the frame range where markers are mislabeled.
Step 4. Using the Labels pane, manually fix/assign marker labels, paying attention to your label settings (direction, max gap, max spike, selected duration).
Step 5. Unlabel all trajectories you want to re-auto-label.
Step 6. Auto-Label the Take again. Only the unlabeled markers will get re-labeled, and all existing labels will be kept the same.
Step 7. Re-examine the marker labels. If some labels are still assigned incorrectly in any frames, repeat steps 3-6 until complete.
The general process for resolving labeling errors is:
Identify the trajectory with the labeling error.
Determine whether the error is a swap, an occlusion, or an unlabeled trajectory.
Resolve the error with the correct tool.
Swap: Use the Swap Fix tool (Edit Tools) or simply re-assign each label (Labels pane).
When manually labeling markers to fix swaps, choose appropriate labeling direction, max spike, and selected range settings.
Occlusion: Use the Gap Fill tool (Edit Tools).
Unlabeled: Manually label the unlabeled trajectory with the correct label (Labels pane).
For more data editing options, read through the Data Editing page.
This page explains different types of captured data in Motive. Understanding these types is essential in order to fully utilize the data-processing pipelines in Motive.
There are three different types of data: 2D data, 3D data, and Solved data. Each type of data will be covered in detail throughout this page, but basically, 2D Data is the captured camera frame data, 3D Data is the reconstructed 3-dimensional marker data, and Solved data is the calculated positions and orientations of Rigid Bodies and Skeleton segments.
Motive saves tracking data into a Take file (TAK extension), and when a capture is initially recorded, all of the 2D data, real-time reconstructed 3D data, and solved data are saved onto a Take file. Recorded 3D data can be post-processed further in Edit mode, and when needed, a new set of 3D data can be re-obtained from saved 2D data by performing the reconstruction pipelines. From the 3D data, Solved data can be derived.
Available data types are listed on the Data pane. When you open up a Take in Edit mode, the loaded data type will be highlighted at the top-left corner of the 3D viewport. If available, 3D Data will be loaded first by default, and the 2D data can be accessed by entering the 2D Mode from the Data pane.
2D data is the foundation of motion capture data. It mainly includes the 2D frames captured by each camera in a system.
Images in recorded 2D data depend on the image-processing mode, also called the video type, that each camera was set to at the time of capture. Cameras set to reference modes (MJPEG grayscale images) record reference videos, and cameras set to tracking modes (object, precision, segment) record 2D object images which can be used in the reconstruction process. The latter 2D object data contains information on the x and y centroid positions of the captured reflections as well as their corresponding sizes (in pixels) and roundness, as shown in the images below.
Using the 2D object data along with the camera calibration information, 3D data is computed. Extraneous reflections that fail to satisfy the 2D object filter parameters (defined under application settings) are filtered out, and only the remaining reflections are processed. The process of converting 2D centroid locations into 3D coordinates is called Reconstruction, and it is covered in a later section of this page.
3D data can be reconstructed either in real-time or in post-capture. For real-time capture, Motive processes captured 2D images on a per-frame basis and streams the 3D data into external pipelines with extremely low processing latency. For recorded captures, the saved 2D data can be used to create a fresh set of 3D data through post-processing reconstruction, and any existing 3D data will be overwritten with the newly reconstructed data.
Contains 2D frames, or 2D object information captured by each camera in a system. 2D data can be monitored from the Camera Preview pane.
Recorded 2D data can be reconstructed and auto-labeled to derive the 3D data.
3D tracking data is not computed yet. The tracking data can be exported only after reconstructing the 3D data.
During playback of recorded 2D data, the 2D data will be live-reconstructed into 3D data and displayed in the 3D viewport.
3D data contains the 3D coordinates of reconstructed markers. 3D markers are reconstructed from 2D data and show up in the perspective view. Each of their trajectories can be monitored in the Graph pane. In recorded 3D data, marker labels can be assigned to reconstructed markers either through the auto-labeling process using asset definitions or by assigning them manually. From these labeled markers, Motive solves the positions and orientations of Rigid Bodies and Skeletons.
Recorded 3D data is editable. Each frame of the trajectory can be deleted or modified. The post-processing edit tools can be used to interpolate the missing trajectory gaps or apply the smoothing, and the labeling tools can be used to assign or reassign the marker labels.
Lastly, recorded 3D data can be exported into various file formats, including CSV, C3D, and FBX.
Reconstructed 3D marker positions.
Marker labels can be assigned.
Assets are modeled and the tracking information is available.
Edit tools can be used to fill the trajectory gaps.
Solved data is positional and rotational 6 degrees of freedom (DoF) tracking data for Rigid Bodies and Skeletons. After a Take has been recorded, you will need to either select Solve all Assets by right-clicking on the Take in the Data pane, or right-click on the asset in the Assets pane and select Solve while in Edit mode. Takes that contain solved data are indicated under the Solved column.
Recorded 2D data, audio data, and reference videos can be deleted from a Take file. To do this, open the Data pane, right-click on the recorded Take(s), and click Delete 2D Data in the context menu. A dialog window will pop up, asking which types of data to delete. After the data is removed, a backup file will be archived in a separate folder.
Deleting 2D data will significantly reduce the size of the Take file. You may want to delete recorded 2D data when there is already a final version of reconstructed 3D data recorded in a Take and the 2D data is no longer needed. However, be aware that deleting 2D data removes the most fundamental data from the Take file. After 2D data has been deleted, the action cannot be reverted, and without 2D data, 3D data cannot be reconstructed again.
Recorded 3D data can be deleted from the context menu in the Data pane. To delete 3D data, right-click on the selected Takes and click Delete 3D data; all reconstructed 3D information will be removed from the Take. When you delete the 3D data, all edits and labeling will be deleted as well. Again, new 3D data can always be reacquired by reconstructing and auto-labeling the Take from 2D data.
Deleting 3D data for a single Take
When no frame range is selected, this will delete 3D data from the entire Take. When a frame range is selected from the Timeline Editor, this will delete 3D data in the selected range only.
Deleting 3D data for multiple Takes
When multiple Takes are selected from the Data pane, deleting 3D data will remove the 3D data from all of the selected Takes, across their entire frame ranges.
When a Rigid Body or Skeleton exists in a Take, Solved data can be recorded. From the Assets pane, right-click one or more assets and select Solve from the context menu to calculate the solved data. To delete it, simply click Remove Solve.
Assigned marker labels can be deleted from the context menu in the Data pane. The Delete Marker Labels feature removes all marker labels from the 3D data of selected Takes. All markers will become unlabeled.
Deleting labels for a single Take
When no frame range is selected, this will unlabel all markers throughout the entire Take. When a frame range is selected from the Timeline Editor, this will unlabel markers in the selected range only.
Deleting labels for multiple Takes
Even when a frame range is selected from the timeline, this will unlabel all markers from all frame ranges of the selected Takes.
A Motive Body license can export tracking data into FBX files for use in other 3D pipelines. There are two types of FBX files: Binary FBX and ASCII FBX.
Notes for MotionBuilder Users
When exporting tracking data to MotionBuilder in the FBX file format, make sure the exported frame rate is supported in MotionBuilder (Mobu). Mobu supports only a select set of playback frame rates, and the rate of the exported FBX file must match one of them in order to play back the data properly.
If a non-standard, unsupported frame rate is selected, the closest supported frame rate is applied.
For more information, please visit Autodesk Motionbuilder's Documentation Support site.
Autodesk has discontinued support for FBX ASCII import in MotionBuilder 2018 and above. For alternatives when working in MotionBuilder, please see the Autodesk MotionBuilder: OptiTrack Optical Plugin page.
Exported FBX files in ASCII format can contain reconstructed marker coordinate data as well as 6 Degree of Freedom data for each involved asset depending on the export setting configurations. ASCII files can also be opened and edited using text editor applications.
FBX ASCII Export Options
Frame Rate
Number of samples included per second of exported data.
Start Frame
End Frame
Scale
Apply scaling to the exported tracking data.
Units
Sets the units used in the exported file.
Use Timecode
Includes timecode.
Export FBX Actors
Includes FBX Actors in the exported file. An Actor is a type of asset used in animation applications (e.g. MotionBuilder) to display imported motions and connect them to a character. In order to animate exported Actors, the associated markers will need to be exported as well.
Optical Marker Name Space
Overrides the default namespaces for the optical markers.
Marker Name Separator
Choose ":" or "_" for marker name separator. The name separator will be used to separate the asset name and the corresponding marker name when exporting the data (e.g. AssetName:MarkerLabel or AssetName_MarkerLabel). When exporting to Autodesk Motion Builder, use "_" as the separator.
Markers
Exports the coordinates of each marker.
Unlabeled Markers
Includes unlabeled markers.
Calculated Marker Positions
Exports the asset's constraint marker positions as the optical marker data.
Interpolated Fingertips
Includes virtual reconstructions at the fingertips. Available only with Skeletons that support finger tracking.
Marker Nulls
Exports the location of each marker.
Export Skeleton Nulls
Rigid Body Nulls
Binary FBX files are more compact than ASCII FBX files. Reconstructed 3D marker data is not included within this file type, but selected Skeletons are exported by saving corresponding joint angles and segment lengths. For Rigid Bodies, positions and orientations at the defined Rigid Body origin are exported.
FBX Binary Export Options
Frame Rate
Number of samples included per second of exported data.
Start Frame
End Frame
Scale
Apply scaling to the exported tracking data.
Units
Sets the unit for exported segment lengths.
Use Timecode
Includes timecode.
Export Skeletons
Skeleton Names
Names of Skeletons that will be exported into the FBX binary file.
Name Separator
Choose ":" or "_" for marker name separator. The name separator will be used to separate the asset name and the corresponding marker name when exporting the data (e.g. AssetName:MarkerLabel or AssetName_MarkerLabel). When exporting to Autodesk Motion Builder, use "_" as the separator.
Rigid Body Nulls
Rigid Body Names
Names of the Rigid Bodies to export into the FBX binary file as 6 DoF nulls.
Marker Nulls
Exports the location of each marker.
Captured tracking data can be exported into a Track Row Column (TRC) file, which is a format used in various mocap applications. Exported TRC files can also be accessed from spreadsheet software (e.g., Excel). These files contain raw output data from the capture, which include positional data of each labeled and unlabeled marker from a selected Take. Expected marker locations and segment orientation data are not included in the exported files. The header contains basic information such as file name, frame rate, time, number of frames, and corresponding marker labels. Corresponding XYZ data is displayed in the remaining rows of the file.
Once the capture volume is calibrated and all markers are placed, you are ready to record Takes. This page covers key concepts and tips that are important for the recording pipeline. For real-time tracking applications, you can skip this page and read through the Data Streaming page.
There are two modes in Motive: Live mode and Edit mode. You can toggle between the two modes from the Control Deck or by using the Shift + ~ hotkey.
Live Mode
The Live mode is mainly used when recording new Takes or when streaming a live capture. In this mode, all of the cameras are continuously capturing 2D images and reconstructing the detected reflections into 3D data in real-time.
Edit Mode
The Edit mode is used for playback of captured Take files. In this mode, you can play back, or stream, recorded data. Captured Takes can also be post-processed by fixing mislabeling errors or interpolating occluded trajectories as needed.
Tip: Prime series cameras will illuminate blue in Live mode, green when recording, and turn off in Edit mode. See more at Camera Status Indicators.
In Motive, capture recording is controlled from the Control Deck. In Live mode, a new Take name can be assigned in the name box, or you can simply start recording and let Motive generate new names on the fly. You can also create empty Takes in the Data Management pane for better organization. To start the capture, select Live mode and click the record button (red). In the Control Deck, the record time and frames are displayed as Hours:Minutes:Seconds:Frames.
Tip: For Skeleton tracking, always start and end the capture with a T-pose or A-pose, so that the Skeleton assets can be redefined from the recorded data as well.
Tip: Efficient ways of managing Takes
Always start by creating session folders to organize related Takes (e.g. by the name of the tracked subject).
Plan ahead and create a list of captures in a text file or a spreadsheet, and you can create empty takes by copying and pasting the list into the Data Management pane (e.g. walk, jog, run, jump).
Once pasted, empty Takes with the corresponding names will be imported.
Select one of the empty takes and start recording. The capture will be saved with the corresponding name.
If the capture was unsuccessful, simply record the same Take again, and another one will be recorded with an incremented suffix added to the end of the given Take name (e.g. walk_001, walk_002, walk_003). The suffix format is defined in the Application Settings.
When captured successfully, select another empty Take in the list and capture the next one.
When a capture is first recorded, both 2D data and real-time reconstructed 3D data are saved into the Take. For more details on each data type, refer to the Data Types page.
2D data: The recorded Take file includes just the 2D object images from each camera.
3D data: The recorded Take file also includes reconstructed 3D marker data in addition to 2D data.
Throughout a capture, you may notice that different types of markers appear in the 3D perspective view. In order to correctly interpret the tracking data, it is important to understand the differences between these marker types. There are three displayed marker types: markers, Rigid Body markers, and bone (or Skeleton) markers.
Marker data, labeled or unlabeled, represents the 3D positions of markers. These markers do not represent Rigid Body or Skeleton solver calculations; they indicate the actual marker positions calculated from the camera data. These markers are represented as solid spheres in the viewport. By default, unlabeled markers are colored white, and labeled markers take the color set in the corresponding Rigid Body or bone.
Labeled Marker Colors:
Colors of the unlabeled markers can be changed from the Application Settings.
Colors of the Rigid Body labeled markers can be changed from the properties of the corresponding asset.
Colors of the markers can be changed from the Constraints XML file if needed.
Rigid Body markers and Skeleton bone markers are referred to as Marker Constraints. They appear as transparent spheres within a Rigid Body or a Skeleton, and each sphere reflects the position where the Rigid Body or Skeleton expects to find a 3D marker. When an asset definition is created, it is assumed that the markers are fixed in place and do not move over the course of the capture.
In order to view Marker Constraints, both the Marker Constraints visual aid option in the viewport and the Marker Constraints property on the corresponding asset must be enabled. This is enabled by default for Skeleton assets, but it must be enabled manually for Rigid Bodies. When the Rigid Body or Skeleton solver is tracking from the 3D markers, the marker reconstructions and the Marker Constraints positions will closely align in the viewport.
For Rigid Body assets, the asset definition assumes that the markers are fixed in the same location and that the object does not deform over the course of the capture. Each Rigid Body is given an acceptable deflection property value. As long as the actual marker position is within the allowable deflection from the Marker Constraints position, the marker will be labeled. For Skeleton assets, since body segments are not perfectly rigid, some amount of offset from the model marker position is allowed.
Various types of files, including the tracking data, can be exported out from Motive. This page provides information on what file formats can be exported from Motive and instructions on how to export them.
Once captures have been recorded into Take files and the corresponding 3D data have been reconstructed, tracking data can be exported from Motive in various file formats.
Exporting Tracking Data
Reconstruction is required to export Marker data, Auto-label is required when exporting Markers labeled from Assets, and Solving is required prior to exporting Assets.
If the recorded Take includes Rigid Body or Skeleton trackable assets, make sure all of the Rigid Bodies and Skeletons are Solved prior to exporting. The solved data will contain positions and orientations of each Rigid Body and Skeleton. If changes have been made to either the Rigid Body or Skeleton, you will need to solve the assets again prior to exporting.
Please note that if you have Assets that are unsolved and just wish to export reconstructed Marker data, you can toggle off Rigid Bodies and Bones (Skeletons) from the Export window (see image below).
In the export dialog window, the frame rate, the measurement scale, and the frame range of the exported data can be configured. Additional export settings are available for each export file format. Read through the pages below for details on the export options for each file format:
Exporting a Single Take
Step 1. Open and select a Take to export from the Data pane. The selected Take must contain reconstructed 3D data.
Step 2. Under the File tab on the command bar, click File → Export Tracking Data. This can also be done by right-clicking on a selected Take from the Data pane and clicking Export Tracking Data from the context menu.
Step 3. On the export dialogue window, select a file format and configure the corresponding export settings.
To export the entire frame range, set Start Frame and End Frame to Take First Frame and Take Last Frame.
To export a specific frame range, set Start Frame and End Frame to Start of Working Range and End of Working Range.
Step 4. Click Save.
Working Range:
The working range (also called the playback range) is both the view range and the playback range of a corresponding Take in Edit mode. Only within the working frame range will recorded tracking data be played back and shown on the graphs. This range can also be used to output specific frame ranges when exporting tracking data from Motive.
The working range can be set from the following places:
In the navigation bar of the Graph View pane, you can drag the handles on the scrubber to set the working range.
You can also use the navigation controls on the Graph View pane to zoom in or zoom out on the frame ranges to set the working range. See: Graph View pane page.
Start and end frames of a working range can also be set from the Control Deck when in the Edit mode.
Exporting Multiple Takes
Step 1. Under the Data pane, shift + select all the Takes that you wish to export.
Step 2. Right-click on the selected Takes and click Export Tracking Data from the context menu.
Step 3. An export dialogue window will show up for batch exporting tracking data.
Step 4. Select the desired output format and configure the corresponding export settings.
Step 5. Select the frame ranges to export under the Start Frame and End Frame settings. You can export either the entire frame range or a specified frame range of each Take. When exporting specific ranges, the desired working range must be set for each respective Take.
To export entire frame ranges, set Start Frame and End Frame to Take First Frame and Take Last Frame.
To export specific frame ranges, set Start Frame and End Frame to Start of Working Range and End of Working Range.
Step 6. Click Save.
Motive Batch Processor:
Exporting multiple Take files with specific options can also be done through a Motive Batch Processor script. For example, refer to the FBXExporterScript.cs script found in the MotiveBatchProcessor folder.
Motive exports reconstructed 3D tracking data in various file formats and exported files can be imported into other pipelines to further utilize capture data. Available export formats include CSV, C3D, FBX, BVH, and TRC. Depending on which options are enabled, exported data may include reconstructed marker data, 6 Degrees of Freedom (6 DoF) Rigid Body data, or Skeleton data. The following chart shows what data types are available in different export formats:
| Export Format | Reconstructed 3D Marker Data | 6 DoF Rigid Body Data | Skeleton Data |
| --- | --- | --- | --- |
| CSV | • | • | • |
| C3D | • |  |  |
| FBX | • | • | • |
| BVH |  |  | • |
| TRC | • |  |  |
CSV and C3D exports are supported in both Motive Tracker and Motive Body licenses. FBX, BVH, and TRC exports are only supported in Motive Body.
A calibration definition of a selected Take can be exported using Export Camera Calibration under the File tab. Exported calibration (CAL) files contain camera positions and orientations in 3D space, and they can be imported in different sessions to quickly load the calibration, as long as the camera setup is unchanged.
Read more about calibration files under the Calibration page.
Assets can be exported into a Motive user profile (.MOTIVE) file so they can be re-imported later. The user profile is a text-readable file that contains various configuration settings in Motive, including asset definitions.
When an asset definition is exported to a MOTIVE user profile, it stores the marker arrangement calibrated for each asset, and the assets can be imported into different Takes without creating new ones in Motive. Note that these files specifically store the spatial relationship of each marker; therefore, only identical marker arrangements will be recognized and defined with the imported asset.
To export assets, go to the File menu and select Export Assets to export all of the assets in Live mode or in the current TAK file(s). You can also use File → Export Profile to export other software settings along with the assets.
Recorded NI-DAQ analog channel data can be exported into C3D and CSV files along with the mocap tracking data. Follow the tracking data export steps outlined above and any analog data that exists in the TAK will also be exported.
C3D Export: Both the mocap data and the analog data will be exported into the same C3D file. Please note that all of the analog data within the exported C3D files will be logged at the same sampling frequency. If any of the devices are captured at different rates, Motive will automatically resample all of the analog devices to match the sampling rate of the fastest device. More on C3D files: https://www.c3d.org/
CSV Export: When exporting tracking data to CSV, an additional CSV file will be exported for each NI-DAQ device in a Take. Each exported CSV file contains basic properties and settings in its header, including device information and sample counts. The voltage amplitude of each analog channel is listed. The ratio of the mocap frame rate to the device sampling rate is also included, since analog data is usually sampled at a higher rate.
Note
The coordinate system used in Motive (y-up right-handed) may be different from the convention used in the biomechanics analysis software.
Common Conventions
Since Motive uses a different coordinate system than the system used in common biomechanics applications, it is necessary to modify the coordinate axis to a compatible convention in the C3D exporter settings. For biomechanics applications using z-up right-handed convention (e.g. Visual3D), the following changes must be made under the custom axis.
The X axis in Motive should be configured to positive X.
The Y axis in Motive should be configured to negative Z.
The Z axis in Motive should be configured to positive Y.
This converts the coordinate axes of the exported data so that the x-axis represents the anteroposterior axis (front/back), the y-axis represents the mediolateral axis (left/right), and the z-axis represents the longitudinal axis (up/down).
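As a quick sanity check, under one reading of the mapping above, the conversion of a single point works out as sketched below. The function name is illustrative only and is not part of Motive.

```csharp
// Illustrative sketch: converts a point from Motive's y-up right-handed
// frame to a z-up right-handed frame, following the exporter mapping above
// (exported X = +X, exported Y = -Z, exported Z = +Y).
static (double X, double Y, double Z) MotiveToZUp(double x, double y, double z)
{
    // A marker at Motive (1, 2, 3) exports as (1, -3, 2); the up component
    // (Motive Y) becomes the exported Z.
    return (x, -z, y);
}
```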
When there is an MJPEG reference camera in a Take, its recorded video can be exported to an AVI file or to a sequence of JPEG files. The Export Video option is located under the File tab, or you can right-click on a TAK file in the Data pane and export from there. At the bottom of the export dialog, the frame rate of the exported AVI file can be set to the full frame rate or down-sampled to half, quarter, 1/8, or 1/16 of the frame rate. You can also adjust the playback speed to export a video with slower or faster playback. The captured reference videos can be exported to AVI files using either the H.264 or MJPEG compression format. The H.264 format allows faster export of the recorded videos and is recommended. Read more about recording reference videos on the Data Recording page.
Reference Video Type: Only compressed MJPEG reference videos can be recorded and exported from Motive. Export for raw grayscale videos is not supported.
Media Player: The exported videos may not be playable in Windows Media Player; please use a more robust media player (e.g. VLC) to play the exported video files.
When a recorded capture contains audio data, an audio file can be exported through the Export Audio option on the File menu or by right-clicking on a Take from the Data pane.
Skeletal marker labels for Skeleton assets can be exported as XML files (example shown below) from the Data pane. The XML files can be imported again to use the stored marker labels when creating new Skeletons.
For more information on Skeleton XML files, read through the Skeleton Tracking page.
Sample Skeleton Label XML File
This page provides information and instructions on how to utilize the Probe Measurement Kit.
The measurement probe tool utilizes the precise tracking of OptiTrack mocap systems to let you measure 3D locations within a capture volume. A probe with an attached Rigid Body is included with the purchased measurement kit. By tracking the markers on the Rigid Body, Motive calculates a precise x-y-z location of the probe tip, allowing you to collect 3D samples in real-time with sub-millimeter accuracy. For the most precise calculation, a probe calibration process is required. Once the probe is calibrated, it can be used to sample single points, or multiple samples can be collected to compute the distance or the angle between sampled 3D coordinates.
Measurement kit includes:
Measurement probe
Calibration block with 4 slots, with approximately 100 mm spacing between each point.
This section provides detailed steps on how to create and use the measurement probe. Please make sure the camera volume has been calibrated successfully before creating the probe. System calibration directly affects the accuracy of marker tracking, and it will therefore affect the probe measurements.
Creating a probe using the Builder pane
Open the Builder pane under the View tab and click Rigid Bodies.
Bring the probe out into the tracking volume and create a Rigid Body from the markers.
Under the Type drop-down menu, select Probe. This will bring up the options for defining a Rigid Body for the measurement probe.
Select the Rigid Body created in step 2.
Place and fit the tip of the probe in one of the slots on the provided calibration block.
Note that there are two steps in the calibration process: refining the Rigid Body definition and calibrating the pivot point. Click the Create button to initiate the probe refinement process.
Slowly move the probe in a circular pattern while keeping the tip fitted in the slot; making a cone shape overall. Gently rotate the probe to collect additional samples.
After the refinement, it will automatically proceed to the next step: the pivot point calibration.
Repeat the same movement to collect additional sample data for precisely calculating the location of the pivot or the probe tip.
When sufficient samples are collected, the pivot point will be positioned at the tip of the probe and the Mean Tip Error will be displayed. If the probe calibration was unsuccessful, just repeat the calibration again from step 4.
Once the probe is calibrated successfully, a probe asset will be displayed over the Rigid Body in Motive, and live x/y/z position data will be displayed under the Probe pane.
Caution
The probe tip MUST remain fitted securely in the slot on the calibration block during the calibration process.
Also, do not press in with the probe since the deformation from compressing could affect the result.
Note: Custom Probes
It's highly recommended to use the probe kit with this feature. That said, you can also use any markered object with a pivot arm to define a custom probe in Motive. A custom probe may produce less accurate measurements, especially if the pivot arm and the object are not rigid and/or if any slight translation occurs during the probe calibration steps.
Using the Probe pane for sample collection
Under the Tools tab, open the Probe pane.
Place the probe tip on the point that you wish to collect.
Click Take Sample in the Probe pane.
A virtual reference point is constructed at the location, and the coordinates of the point are displayed in the Probe pane. The point's location can be exported from the Probe pane as a CSV file.
Collecting additional samples will provide distance and angles between collected samples.
As samples are collected, their coordinate data is automatically written to CSV files in the OptiTrack documents folder, located at C:\Users\[Current User]\Documents\OptiTrack. The 3D positions for all collected measurements, their respective RMSE values, and the distances between each consecutive sample point are saved in this file.
If needed, you can also trigger Motive to export the collected sample coordinate data to a designated directory. To do this, simply click the export option in the Probe pane.
The location of the probe tip can also be streamed into another application in real-time. You can do this by data-streaming the probe Rigid Body position via NatNet. Once calibrated, the pivot point of the Rigid Body gets positioned precisely at the tip of the probe. The location of a pivot point is represented by the corresponding Rigid Body x-y-z position, and it can be referenced to find out where the probe tip is located.
The Motive Batch Processor is a separate stand-alone Windows application, built on the new NMotive scripting and programming API, that can be utilized to process a set of Motive Take files via IronPython or C# scripts. While the Batch Processor includes some example script files, it is primarily designed to utilize user-authored scripts.
Initial functionality includes scripting access to file I/O, reconstructions, high-level Take processing using many of Motive's existing editing tools, and data export. Upcoming versions will provide access to track, channel, and frame-level information, for creating cleanup and labeling tools based on individual marker reconstruction data.
Motive Batch Processor scripts make use of the NMotive .NET class library, and you can also utilize the NMotive classes to write .NET programs and IronPython scripts that run outside of this application. The NMotive assembly is installed in the Global Assembly Cache and is also located in the assemblies sub-directory of the Motive install directory. For example, the default location for the assembly included in the 64-bit Motive installer is:
C:\Program Files\OptiTrack\Motive\assemblies\x64
The full source code for the Motive Batch Processor is also installed with Motive, at:
C:\Program Files\OptiTrack\Motive\MotiveBatchProcessor\src
You are welcome to use the source code as a starting point to build your own applications on the NMotive framework.
Requirements
A batch processor script using the NMotive API. (C# or IronPython)
Take files that will be processed.
Steps
Launch the Motive Batch Processor. It can be launched from the Start menu, the Motive install directory, or the Data pane in Motive.
First, select and load a Batch Processor script. Sample scripts for various pipelines can be found in the [Motive Directory]\MotiveBatchProcessor\ExampleScripts\ folder.
Load the captured Takes (TAK) that will be processed using the imported scripts.
Click Process Takes to batch process the Take files.
Reconstruction Pipeline
When running the reconstruction pipeline in the Batch Processor, the reconstruction settings must be loaded using the ImportMotiveProfile method. From Motive, export the user profile and make sure it includes the reconstruction settings. Then, import this user profile file in the Batch Processor script before running the reconstruction (trajectorizer) pipeline so that the proper settings are used for reconstructing the 3D data. For more information, refer to the sample scripts located in the TakeManipulation folder.
A class reference in Microsoft compiled HTML (.chm) format can be found in the Help sub-directory of the Motive install directory. The default location for the help file (in the 64-bit Motive installer) is:
C:\Program Files\OptiTrack\Motive\Help\NMotiveAPI.chm
The Motive Batch Processor can run C# and IronPython scripts. Below is an overview of the C# script format, as well as an example script.
A valid Batch Processor C# script file must contain a single class implementing the ITakeProcessingScript interface. This interface defines a single function: Result ProcessTake(Take t, ProgressIndicator progress). Result, Take, and ProgressIndicator are all classes defined in the NMotive namespace. The Take object t is an instance of the NMotive Take class; it is the Take being processed. The progress object is an instance of the NMotive ProgressIndicator class and allows the script to update the Batch Processor UI with progress and messages. The general format of a Batch Processor C# script is:
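A minimal sketch of that format is shown below. The interface and method signature follow the description above; the Result constructor is an assumption modeled on the shipped example scripts, so check those scripts for the exact API.

```csharp
using NMotive;

// Minimal Batch Processor script skeleton (a sketch, not a shipped sample).
public class ExampleScript : ITakeProcessingScript
{
    public Result ProcessTake(Take t, ProgressIndicator progress)
    {
        // Per-Take processing (reconstruction, editing, export) goes here;
        // 'progress' can be used to report status to the Batch Processor UI.
        // The Result(bool, string) constructor is an assumption based on
        // the scripts in the ExampleScripts folder.
        return new Result(true, string.Empty);
    }
}
```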
In the [Motive Directory]\MotiveBatchProcessor\ExampleScripts\ folder, there are multiple C# (.cs) sample scripts that demonstrate the use of NMotive for various pipelines, including tracking data export and other post-processing tools. Note that your C# script file must have a '.cs' extension.
Included sample script pipelines:
ExporterScript - BVH, C3D, CSV, FBXAscii, FBXBinary, TRC
TakeManipulation - AddMarker, DisableAssets, GapFill, MarkerFilterScript, ReconstructAutoLabel, RemoveUnlabeledMarkers, RenameAsset
IronPython is an implementation of the Python programming language that can use the .NET libraries and Python libraries. The batch processor can execute valid IronPython scripts in addition to C# scripts.
Your IronPython script file must import the clr module and reference the NMotive assembly. In addition, it must contain a module-level function with the following signature, where t is an NMotive Take and progress is an NMotive ProgressIndicator:
def ProcessTake(t, progress)
The following illustrates a typical IronPython script format.
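A minimal sketch, assuming the same NMotive types as the C# example above (the Result constructor is an assumption; see the shipped sample scripts for the exact API):

```python
import clr
clr.AddReference('NMotive')  # reference the NMotive assembly
from NMotive import *

# Module-level entry point called by the Batch Processor for each Take.
# 't' is an NMotive Take; 'progress' is an NMotive ProgressIndicator.
def ProcessTake(t, progress):
    # Per-Take processing (editing, export, etc.) goes here.
    return Result(True, '')
```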
In the [Motive Directory]\MotiveBatchProcessor\ExampleScripts\ folder, there are sample scripts that demonstrate the use of NMotive for various pipelines, including tracking data export and other post-processing tools. Note that your IronPython script file must have a '.py' extension.
This page covers different video modes that are available on the OptiTrack cameras. Depending on the video mode that a camera is configured to, captured frames are processed differently, and only the configured video mode will be recorded and saved in Take files.
Video types, or image-processing modes, available in OptiTrack Cameras
There are several video types, or image-processing modes, that can be used when capturing with OptiTrack cameras. Depending on the camera model, the available modes vary slightly. Each video mode processes captured frames differently at both the camera hardware and software levels. Furthermore, the precision of the capture and the required amount of CPU resources vary depending on the configured video type.
The video types are categorized as either tracking modes (Object mode and Precision mode) or reference modes (MJPEG and raw grayscale). Only cameras in the tracking modes contribute to the reconstruction of 3D data.
To switch between video types, simply right-click on one of the cameras from the 2D camera preview pane and select the desired image processing mode under the video types.
Motive records frames only in the configured video type. A camera's video type cannot be changed for recorded Takes during post-processing of captured data.
(Tracking Mode) Object mode performs on-camera detection of the centroid location, size, and roundness of the markers, and the respective 2D object metrics are sent to the host PC. In general, this mode is recommended for obtaining 3D data. Compared to other processing modes, Object mode has the smallest CPU footprint, so the lowest processing latency can be achieved while maintaining high accuracy. However, be aware that the 2D reflections are truncated into object metrics in this mode. Object mode is beneficial for Prime series and Flex 13 cameras when the lowest latency is necessary or when CPU performance is taxed by Precision Grayscale mode (e.g. high camera counts on a less powerful CPU).
Supported Camera Models: Prime/PrimeX series, Flex 13, and S250e camera models.
(Tracking Mode) Precision mode performs on-camera detection of marker reflections and their centroids. The centroid regions of interest are sent to the PC for additional processing to determine the precise centroid location. This provides high-quality centroid locations but is computationally expensive, and it is recommended only for low to moderate camera count systems for 3D tracking when Object mode is unavailable.
Supported Camera Models: Flex series, Tracking Bars, S250e, Slim13e, and Prime 13 series camera models.
(Reference Mode) The MJPEG-compressed grayscale mode captures grayscale frames that are compressed on-camera for scalable reference video. Grayscale images are used only for reference purposes, and the processed frames will not contribute to the reconstruction of 3D data. The MJPEG mode can run at full frame rate and be synchronized with the tracking cameras.
Supported Camera Models: All camera models
(Reference Mode) Raw grayscale mode processes full-resolution, uncompressed grayscale images. This mode is designed to be used only for reference purposes, and the processed frames will not contribute to the reconstruction of 3D data. Because of the high bandwidth required to send raw grayscale frames, this mode is not fully synchronized with the other tracking cameras, and these cameras will run at a lower frame rate. Also, raw grayscale videos cannot be exported from a recording. Use this video mode only for aiming cameras and monitoring camera views to diagnose tracking problems.
Supported Camera Models: All camera models.
Open the Devices pane and the Properties pane, then select one or more cameras from the list. Once the selection is made, the respective camera properties will be shown in the Properties pane. The current video type is shown in the Video Mode section, and you can change it using the drop-down menu.
From Perspective View
In the perspective view, right-click on a camera from the viewport and set the camera to the desired video mode.
From Cameras View
In the cameras view, right-click on a camera view and change the video type for the selected camera.
Compared to the object images taken by non-reference cameras in the system, MJPEG videos are larger in data size, and recording reference video consumes more network bandwidth. A high amount of data traffic can increase the system latency or cause reductions in the system frame rate. For this reason, we recommend setting no more than one or two cameras to a reference mode. Reference views can be observed from the Camera Preview pane, or by selecting Video from the Viewport dropdown and selecting the camera that is in MJPEG mode.
If grayscale mode is selected during a recording instead of MJPEG, no reference video will be recorded, and the data from that camera will display as a black screen. Full grayscale is strictly for aiming and focusing cameras.
Note:
Processing latency can be monitored from the status bar located at the bottom.
MJPEG videos are used only for reference purposes, and the processed frames will not contribute to the reconstruction of 3D data.
The video captured by reference cameras can be monitored from the viewport. To view the reference video, select the camera that you wish to monitor, and use the Num 3 hotkey to switch to the reference view. If the camera was calibrated and capturing reference videos, 3D assets will be overlaid on top of the reference image.
The Data Streaming settings can be found by selecting the Settings cog or by selecting Edit > Settings in the Motive Toolbar/Command Bar.
Motive offers multiple options to stream tracking data to external applications in real-time. Streaming plugins are available for Autodesk MotionBuilder, The MotionMonitor, Visual3D, Unreal Engine 4, 3ds Max, Maya (VCS), VRPN, and trackd, and they can be downloaded from the OptiTrack website. For other streaming options, the NatNet SDK enables users to build custom clients to receive capture data. None of the listed streaming options require a separate license. Common motion capture applications rely on real-time tracking, and the OptiTrack system is designed to deliver data at extremely low latency, even when streaming to third-party pipelines. This page covers configuring Motive to broadcast frame data over a selected server network. Detailed instructions on specific streaming protocols are included in the PDF documentation that ships with the respective plugins or SDKs.
Read through the Application Settings page for explanations on each setting. NaturalPoint Data Streaming Forum: OptiTrack Data Streaming.
While streaming, the Labeled Markers setting must be enabled in order for Unlabeled markers to stream. If you do not wish to stream Unlabeled markers, they can be toggled off so that only Labeled markers are streamed. Due to legacy behavior, if Labeled is disabled, then both Labeled and Unlabeled markers are disabled, even if Unlabeled is toggled on.
Select the network interface address for streaming data.
Select desired data types to stream under streaming options.
When streaming Skeletons, set the appropriate bone naming convention for client application.
Check Enable at the top under the NatNet settings.
Configure the streaming settings and designate the corresponding IP address in the client application.
Stream live or playback captures.
It is important to select the network adapter (interface, IP Address) for streaming data. Most Motive Host PCs will have multiple network adapters - one for the camera network and one (or more) for the local area network (LAN). Motive will only stream over the selected adapter (interface). Select the desired interface using the Streaming tab in Motive's Settings. The interface can be either over a local area network (LAN) or on the same machine (localhost, local loopback). If both server (Motive) and client application are running on the same machine, set the network interface to the local loopback address (127.0.0.1). When streaming over a LAN, select the IP address of the network adapter connected to the LAN. This will be the same address the Client application will use to connect to Motive.
Firewall or anti-virus software can block network traffic, so it is important to make sure these applications are disabled or configured to allow access to both server (Motive) and Client applications.
Streamed Data Types
Before starting to broadcast data onto the selected network interface, define which data types to stream. Under streaming options, there are settings where you can include or exclude specific data types and syntax. Set only the necessary criteria to true. For most applications, the default settings will be appropriate.
See: Application Settings: Streaming
Unicast Subscription
New in Motive 3.0.
Starting from Motive version 3.0, unicast NatNet clients have the ability to subscribe only to desired data types that are being streamed out. This feature helps to minimize the size of the data packets and helps to reduce the streaming latency. This is especially beneficial for wireless unicast clients where streaming is more vulnerable to packet loss.
For more information on data subscription, please read the following page: NatNet: Unicast Data Subscription Commands
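As a rough sketch of how a connected unicast client might issue a subscription request, the snippet below uses the SDK's SendMessageAndWait call. The command string shown is an assumed example, not verified syntax; confirm the exact commands against the NatNet: Unicast Data Subscription Commands page.

```cpp
// Assumes 'client' is a NatNetClient connected in Unicast mode (see the
// connection sketch earlier on this page).
// "SubscribeToData,RigidBody,all" is an assumed example command string.
void* response = nullptr;
int responseSize = 0;
client.SendMessageAndWait("SubscribeToData,RigidBody,all",
                          &response, &responseSize);
```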
When streaming Skeleton data, the bone naming convention formats the annotations for each segment in the streamed data. The appropriate convention should be configured so that the client application properly recognizes segments. For example, when streaming to Autodesk pipelines, the naming convention should be set to FBX.
Motive (1.7+) uses a right-handed Y-up coordinate system. However, coordinate systems used in client applications may not always agree with the convention used in Motive. In this case, the coordinate system in streamed data needs to be modified to a compatible convention. For client applications with a different ground plane definition, Up Axis can be changed under Advanced Network Settings. For compatibility with left-handed coordinate systems, the simplest method is to rotate the capture volume 180 degrees on the Y axis when defining the ground plane during Calibration.
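For instance, one common way to express a Y-up to Z-up change of convention (a sketch of the math only, not Motive's internal transform) is a 90-degree rotation about the X axis:

```latex
R_x(90^\circ) =
\begin{pmatrix}
1 & 0 & 0\\
0 & 0 & -1\\
0 & 1 & 0
\end{pmatrix},
\qquad
(x,\,y,\,z)_{Y\text{-up}} \;\mapsto\; (x,\,-z,\,y)_{Z\text{-up}}
```

Under this mapping the Y-up "up" vector (0, 1, 0) lands on (0, 0, 1) in the Z-up frame. In practice the Up Axis setting handles this for you.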
NatNet is a client/server networking protocol which allows sending and receiving data across a network in real-time. It utilizes UDP along with either Unicast or Multicast communication for integrating and streaming reconstructed 3D data, Rigid Body data, and Skeleton data from OptiTrack systems to client applications. Within the API, a class for communicating with OptiTrack server applications is included for building client protocols. Using the tools provided in the NatNet API, capture data can be used in various application platforms. Please refer to the NatNet User Guide for more information on using NatNet and its API references.
Rotation conventions
NatNet streams rotational data in quaternions. If you wish to present the rotational data in the Euler convention (pitch-yaw-roll), the quaternion data needs to be converted into Euler angles. In the provided NatNet SDK samples, the SampleClient3D application converts quaternion rotations into Euler rotations to display in the application interface. The sample algorithms for the conversion are scripted in the NATUtils.cpp file. Refer to the NATUtils.cpp and SampleClient3D.cpp files to find out how to convert quaternions into Euler conventions.
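For reference, a generic quaternion-to-Euler conversion looks like the following. This is a standalone sketch using the common Z-Y-X (yaw-pitch-roll) sequence, not a copy of NATUtils.cpp; the rotation order used by the SDK samples may differ.

```cpp
#include <cmath>

// Converts a quaternion (qx, qy, qz, qw) to Euler angles in radians,
// using the Z-Y-X (yaw-pitch-roll) sequence.
struct Euler { double roll, pitch, yaw; };

Euler QuatToEuler(double qx, double qy, double qz, double qw)
{
    const double kPi = 3.14159265358979323846;
    Euler e;

    // roll (rotation about X)
    double sinr_cosp = 2.0 * (qw * qx + qy * qz);
    double cosr_cosp = 1.0 - 2.0 * (qx * qx + qy * qy);
    e.roll = std::atan2(sinr_cosp, cosr_cosp);

    // pitch (rotation about Y); clamp to avoid NaN from asin at the poles
    double sinp = 2.0 * (qw * qy - qz * qx);
    e.pitch = std::abs(sinp) >= 1.0 ? std::copysign(kPi / 2.0, sinp)
                                    : std::asin(sinp);

    // yaw (rotation about Z)
    double siny_cosp = 2.0 * (qw * qz + qx * qy);
    double cosy_cosp = 1.0 - 2.0 * (qy * qy + qz * qz);
    e.yaw = std::atan2(siny_cosp, cosy_cosp);
    return e;
}
```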
If desired, recording in Motive can control, or be controlled by, other remote applications by sending or receiving either NatNet commands or XML broadcast messages to or from a client application through the UDP communication protocol. This enables client applications to trigger Motive, or vice versa. Using NatNet commands is recommended because they are not only more robust but also offer additional control features.
Recording start and stop commands can also be transmitted via XML packets. When triggering via XML messages, the Remote Trigger setting under Advanced Network Settings must be set to true. In order for Motive, or clients, to receive the packets, the XML messages must be sent via the triggering UDP port. The triggering port is two increments (+2) above the defined Command Port (default: 1510) under the Advanced Network Settings, so it defaults to 1512. Lastly, the XML messages must exactly follow the appropriate syntax:
XML Triggering Port: Command Port (Advanced Network Settings) + 2. This defaults to 1512 (1510 + 2).
Tip: Within the NatNet SDK sample package, there are simple applications (BroadcastSample.cpp (C++) and NatCap (C#)) that demonstrate a sample use of the XML remote trigger in Motive.
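As a sketch of the transport side, the following minimal POSIX snippet sends an XML packet to Motive's triggering port. The IP address and the assumption that defaults are unchanged (Command Port 1510, so triggering port 1512) are illustrative; error handling is omitted for brevity.

```cpp
#include <arpa/inet.h>
#include <cstring>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

// Sends an XML trigger string to Motive over UDP (POSIX sockets).
int SendTrigger(const char* xml, const char* motiveIp)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(1512);              // Command Port (1510) + 2
    inet_pton(AF_INET, motiveIp, &addr.sin_addr);
    ssize_t sent = sendto(sock, xml, std::strlen(xml), 0,
                          reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
    close(sock);
    return sent < 0 ? -1 : 0;
}
```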
XML syntax for the start / stop trigger packet
Capture Start Packet
Name: Name of the Take that will be recorded.
SessionName: Name of the session folder.
Notes: Informational note describing the recorded Take.
Description: (Reserved)
Assets
DatabasePath: The file directory where the recorded captures will be saved.
Start Timecode
PacketID: (Reserved)
HostName: (Reserved)
ProcessID: (Reserved)
Capture Stop Packet
Name: Name of the recorded Take.
Notes: Informational notes describing the recorded Take.
Assets
Timecode
HostName: (Reserved)
ProcessID: (Reserved)
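For illustration, the packets might look like the following. This is a hedged sketch assembled from the field lists above; the exact element and attribute casing should be verified against the BroadcastSample.cpp sample, which demonstrates the working syntax.

```xml
<?xml version="1.0" encoding="utf-8"?>
<CaptureStart>
    <Name VALUE="Take_001"/>
    <SessionName VALUE="Session1"/>
    <Notes VALUE="Remote-triggered capture"/>
    <DatabasePath VALUE="C:/CaptureData/Session1/"/>
    <TimeCode VALUE="00:00:00:00"/>
    <PacketID VALUE="0"/>
</CaptureStart>
```

```xml
<?xml version="1.0" encoding="utf-8"?>
<CaptureStop>
    <Name VALUE="Take_001"/>
</CaptureStop>
```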
Runs local or over network. The NatNet SDK includes multiple sample applications for C/C++, OpenGL, WinForms/.NET/C#, MATLAB, and Unity. It also includes a C/C++ sample showing how to decode Motive UDP packets directly without the use of client libraries (for cross platform clients such as Linux). For more information regarding NatNet SDK visit our wiki page NatNet SDK 4.0.
C/C++ or VB/C#/.NET or MATLAB
Markers: Y Rigid Bodies: Y Skeletons: Y
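As a minimal sketch of what a custom NatNet client's data handler can look like (C++, using types from NatNetTypes.h; see the SDK's sample clients for the authoritative usage):

```cpp
#include <cstdio>
#include <NatNetClient.h>
#include <NatNetTypes.h>

// Called by the NatNet SDK on every received frame of mocap data.
void NATNET_CALLCONV OnFrame(sFrameOfMocapData* data, void* /*user*/)
{
    for (int i = 0; i < data->nRigidBodies; ++i)
    {
        const sRigidBodyData& rb = data->RigidBodies[i];
        printf("RB %d pos (%.3f %.3f %.3f) quat (%.3f %.3f %.3f %.3f)\n",
               rb.ID, rb.x, rb.y, rb.z, rb.qx, rb.qy, rb.qz, rb.qw);
    }
}

// ...after a successful client.Connect(params):
// client.SetFrameReceivedCallback(OnFrame, nullptr);
```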
Runs local or over network. Allows streaming both recorded data and real-time capture data for markers, Rigid Bodies, and Skeletons.
Comes with Motion Builder. Resources: OptiTrack Optical Device, OptiTrack Skeleton Device, OptiTrack Insight VCS.
Markers: Y Rigid Bodies: Y Skeletons: Y
Streams capture data into Autodesk Maya for using the Virtual Camera System.
Requirements:
Requires Motive 1.0+
Requires a license valid through March 2, 2018 (check your status)
Works with Maya 2011 (x86 and x64), 2014, 2015, 2016, 2017 and 2018
Markers: Y Rigid Bodies: Y Skeletons: Y
With a Visual3D license, you can download the Visual3D server application, which is used to connect the OptiTrack server to the Visual3D application. Using the plugin, Visual3D receives streamed marker data to solve precise Skeleton models for biomechanics applications.
Markers: Y Rigid Bodies: N Skeletons: N
C-Motion wiki: Visual3DServer Plugin
Runs local or over network. Supports Unreal Engine versions up to 5. This plugin allows streaming of Rigid Bodies, markers, Skeletons, and integration of HMD tracking within Unreal Engine projects. For more details, read through the OptiTrack Unreal Engine Plugin documentation page.
Markers: Y Rigid Bodies: Y Skeletons: Y
Runs local or over network. This plugin allows streaming of tracking data and integration of HMD tracking within Unity projects. For more details, read through the OptiTrack Unity Plugin documentation page.
Markers: Y Rigid Bodies: Y Skeletons: Y
Runs Motive headlessly. Provides the most complete Motive command and control. Also provides access to camera imagery and other data elements not available in the other streams.
C/C++
Markers: Y Rigid Bodies: Y Skeletons: N
Within Motive
Runs local or over network.
Includes source code (C++) of a sample implementation for VRPN streaming. The Virtual-Reality Peripheral Network (VRPN) is an open source project containing a library and a set of servers that are designed for implementing a network interface between application programs and tracking devices used in a virtual-reality system.
Motive 3.0 uses VRPN version 7.33.1.
For more information: VRPN Github
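A minimal VRPN client sketch (C++) is shown below. "RigidBody1" is a placeholder for the streamed asset name in Motive; devices are addressed as <name>@<server address>. vrpn_Tracker_Remote is part of the open-source VRPN library.

```cpp
#include <cstdio>
#include <vrpn_Tracker.h>

// Callback invoked by VRPN whenever a new pose report arrives.
void VRPN_CALLBACK HandlePose(void* /*user*/, const vrpn_TRACKERCB t)
{
    printf("sensor %d pos (%.3f %.3f %.3f)\n",
           (int)t.sensor, t.pos[0], t.pos[1], t.pos[2]);
}

int main()
{
    // "RigidBody1" is a hypothetical asset name; 127.0.0.1 assumes Motive
    // runs on the same machine.
    vrpn_Tracker_Remote tracker("RigidBody1@127.0.0.1");
    tracker.register_change_handler(nullptr, HandlePose);
    while (true)
        tracker.mainloop();  // pump the connection to receive updates
}
```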
Within Motive
In Motive, the Application Settings can be accessed under the View tab or by clicking the icon on the main toolbar. Default Application Settings can be recovered by Reset Application Settings under the Edit Tools tab from the main Toolbar.
Sets the separator (_) and string format specifiers (%03d) for the suffix added after existing file names.
Enable/Disable auto-archiving of Takes when trimming Takes.
When enabled, all of the session folders loaded in the Data pane will persist when Motive is closed and relaunched.
[Advanced] Sets the default device profile, in XML format, to load into Motive. The device profile determines and configures the settings for peripheral devices such as force plates, NI-DAQ devices, or navigation controllers.
[Advanced] Enter the IP address of the glove. Leave blank to use the Local Host IP.
Enable or disable the LED panel in front of cameras that displays assigned camera numbers.
Sets how Camera IDs are assigned for each camera in a setup. Available options are By Location and By Serial Number. When assigning by location, camera IDs will be given following the positional order in clockwise direction, starting from the -X and -Z quadrant in respect to the origin.
Controls the color of the RGB Status Indicator Ring LEDs (Prime Series cameras only). Options include distinct indications for Live, Recording, Playback, Selection and Scene camera statuses, and you can choose the color for the corresponding camera status.
(Default: Blue) Sets the indicator ring color for cameras in Live mode.
(Default: Green) Sets the indicator ring color for cameras when recording a capture.
(Default: Black) Sets the indicator ring color for cameras when Motive is in playback mode.
(Default: Yellow) Sets the indicator ring color for cameras that are selected in Motive.
(Default: Orange) Sets the indicator ring color for cameras that are set as the reference camera in Motive.
(Default: Enabled) Controls whether the hibernation light turns on for the cameras when Motive is closed.
Configures the Aim Assist button. Sets whether the button will switch the camera to MJPEG mode and back to the default camera group record mode. Valid options are: True (default) and False.
(Default: Grayscale Only) Sets whether the camera button will display the aiming crosshairs on the camera. Options: None, Grayscale Only, All Modes.
Enables or disables LED illumination on the Aim Assist button behind Prime Series cameras.
Sets the default device profile, XML format, to load onto Motive. The device profile determines and configures the settings for peripheral devices such as force plates, NI-DAQ, or navigation controllers.
Enter IP address of glove. Leave blank to use Local Host IP.
Shows which log file is loaded into Motive and displayed in the Log pane. A folder icon is also provided to change or add a log file.
Enable or disable continuous calibration for bumped cameras. When enabled, Motive will continuously monitor the calibration and update it as necessary. When this is set to true, calibration updates more invasively to accommodate camera position/orientation changes. In general, if a camera is significantly moved or displaced, it's suggested to calibrate the system again. For more information, refer to the Continuous Calibration page.
Restrict camera translation during continuous calibration.
Automatically loads the previous, or last saved, calibration setting when starting Motive.
The time duration, in seconds, during which the camera system will auto-detect existing extraneous reflections in order to apply masks during the Calibration process.
Number of samples suggested for calibration. Depending on this setting, the sample count feedback will be colored differently during the Calibration process.
During the calibration wanding process, informative visuals are drawn over the camera view to show successfully collected wand samples and also to mark any extraneous reflections that appear. This is enabled by default. Disabling this will hide those calibration visuals.
When enabled, you can edit the camera calibration position with the 3D Gizmo tool.
Max distance cameras are translated by the position correction tool in mm.
Enables detection of PoE+ switches by High Power cameras (Prime 17W, PrimeX 22, Prime 41, and PrimeX 41). LLDP allows the cameras to communicate directly with the switch and determine power availability to increase output to the IR LED rings. When using Ethernet switches that are not PoE+ enabled or switches that are not LLDP enabled, cameras will not go into high power mode even with this setting enabled.
This page provides an explanation on some of the settings that affect how the 3D tracking data is obtained. Most of the related settings can be found under the Live Pipeline tab in the Application settings. A basic understanding of this process will allow you to fully utilize Motive for analyzing and optimizing captured 3D tracking data. With that being said, we do not recommend changing these settings as the default settings should work well for most tracking applications.
Reconstruction is a process of deriving 3D points from 2D coordinates obtained by captured camera images. When multiple synchronized images are captured, 2D centroid locations of detected marker reflections are triangulated on each captured frame and processed through the solver pipeline in order to be tracked. This process involves trajectorization of detected 3D markers within the calibrated capture volume and the booting process for the tracking of defined assets.
For real-time tracking in Live mode, the settings for this pipeline can be configured from the Live-Pipeline tab in the Application Settings. For post-processing recorded files in Edit mode, the solver settings can be accessed under corresponding Take properties. Note that optimal configurations may vary depending on capture applications and environmental conditions, but for most common applications, default settings should work well.
In this page, we will focus on the Live Pipeline settings and the Camera Settings, which are the key settings that have direct effects on the reconstruction outcome.
Camera settings can be configured under the Devices pane. In general, the overall quality of 3D reconstructions is affected by the quality of captured camera images. For this reason, the camera lens must be focused on the tracking volume, and the settings should be configured so that the markers are clearly visible in each camera view. Thus, the camera settings, such as camera exposure and IR intensity values, must always be checked and optimized in each setup. The following sections highlight additional settings that are directly related to 3D reconstruction.
Tracking mode vs. Reference mode: Only the cameras that are configured in the tracking mode (Object or Precision) will contribute to reconstructions. Cameras in the reference mode (MJPEG or Grayscale) will NOT contribute to reconstructions. See Camera Video Types page for more information.
To toggle between camera video types in Motive, click the camera video type icon under Mode in the Devices pane.
The THR setting is located in the camera properties in Motive. When cameras are set to a tracking mode, only the pixels with brightness values greater than the configured threshold are captured and processed. The pixels brighter than the threshold are referred to as thresholded pixels, and all other pixels that do not satisfy the brightness threshold get filtered out. Only the clusters of thresholded pixels are then filtered through the 2D Object Filter to be potentially considered as marker reflections.
We do not recommend lowering the THR value (default: 200) for the cameras, since lowering it can introduce false reconstructions and noise in the data.
To inspect brightness values of the pixels, set the Pixel Inspection to true under the View tab in the Application Settings.
The Live Pipeline settings under the Application Settings control the tracking quality in Motive. When a camera system captures multiple synchronized 2D frames, the images are processed through two main stages before getting reconstructed into 3D tracking. The first filter is at the camera hardware level and the other is at the software level, and both are important in deciding which 2D reflections get identified as marker reflections and reconstructed into 3D data. Adjust these settings to optimize the 3D data acquisition in both live reconstruction and post-processing reconstruction of capture data.
When a frame of image is captured by a camera, the 2D camera filter is applied. This filter works by judging the sizes and shapes of the detected reflections or IR illuminations, and it determines which ones can be accepted as markers. Please note that the camera filter settings can be configured in Live mode only, because this filter is applied at the hardware level when the 2D frames are first captured. Thus, you will not be able to modify these settings on a recorded Take, as the 2D data has already been filtered and saved; however, when needed, you can increase the threshold on the filtered 2D data and perform post-processing reconstruction to recalculate 3D data from the 2D data.
Min/Max Thresholded Pixels
The Min/Max Thresholded Pixels settings determine lower and upper boundaries of the size filter. Only reflections with pixel counts within the boundaries will be considered as marker reflections, and any other reflections below or above the defined boundary will be filtered out. Thus, it is important to assign appropriate values to the minimum and maximum thresholded pixel settings.
For example, in a close-up capture application, marker reflections appear bigger in the camera's view. In this case, you may want to raise the maximum threshold value so that reflections with more thresholded pixels are still considered as marker reflections. For common applications, however, the default range should work fine.
Circularity
In addition to the size filter, the 2D Object Filter also identifies marker reflections based on their shape; specifically, the roundness. It assumes that all marker reflections have circular shapes and filters out all non-circular reflections detected by each camera. The allowable circularity value is defined under the Marker Circularity settings in the Reconstruction pane. The valid range is between 0 and 1, with 0 being completely flat and 1 being perfectly round. Only reflections with circularity values bigger than the defined threshold will be considered as marker reflections.
Object mode vs. Precision Mode
The Object mode and Precision mode deliver slightly different data to the host PC. In Object mode, cameras compute the 2D centroid location, size, and roundness of markers on-board and deliver those values to the host PC. In Precision mode, cameras instead deliver the thresholded pixel regions to the host PC, where additional processing determines the centroid location, size, and roundness of the reflections. Read more about Video Types.
After the 2D camera filter has been applied, each 2D centroid captured by a calibrated camera forms a marker ray, which is a 3D vector connecting the detected centroid to a 3D coordinate in the capture volume. When a minimum required number of rays (as defined by the Minimum Rays setting) converge and intersect within the allowable maximum offset distance (defined by the 3D Threshold setting), trajectorization of a 3D marker occurs. Trajectorization is the process of using 2D data to calculate the respective 3D marker trajectories in Motive.
Tracked Ray (Green)
Tracked rays are marker rays that represent detected 2D centroids that are contributing to 3D reconstructions within the volume. Tracked Rays will be visible only when reconstructions are selected from the viewport.
Untracked Ray (Red)
An untracked ray is a marker ray that fails to contribute to the reconstruction of a 3D point. Untracked rays occur when reconstruction requirements, usually the ray count or the max residuals, are not met.
Motive processes marker rays together with the camera calibration to reconstruct the respective markers, and the solver settings determine how 2D data gets trajectorized and solved into 3D data for tracking Rigid Bodies and/or Skeletons. The solver not only tracks from the marker rays but also utilizes pre-defined asset definitions to provide high-quality tracking. The default solver settings work for most tracking applications, and users should not need to modify these settings. That being said, some of the basic settings that can be modified are summarized below.
Minimum Rays to Start / Minimum Rays to Continue
This setting sets the minimum number of tracked marker rays required for a 3D point to be reconstructed. In other words, this is the required number of calibrated cameras that need to see the marker. Increasing the minimum ray count may prevent extraneous reconstructions, and decreasing it may help when occlusions leave too few cameras seeing a marker. In general, modifying this is recommended only for high camera count setups.
More Settings
The Live Pipeline settings don't have to be modified for most tracking applications. There are other reconstruction settings that can be adjusted to improve the acquisition of 3D data. For a detailed description of each setting, read through the Application Settings: Live Pipeline page or refer to the corresponding tooltips.
Motive performs real-time reconstruction of 3D coordinates directly from either captured or recorded 2D data. When Motive is live-processing the data, you can examine the marker rays from the viewport, inspect the Live-Pipeline settings, and optimize the 3D data acquisition.
There are two modes where Motive is reconstructing 3D data in real-time:
Live mode (Live 2D data capture)
2D mode (Recorded 2D data)
In the Live mode, Motive is live-processing the data from captured 2D frames to obtain 3D tracking data in real-time, and you can inspect and monitor the marker rays from the 3D viewport. Any changes to the Live Pipeline (Solver/Camera) settings under the Application Settings will be reflected immediately in the Live mode.
The 2D Mode is used to monitor 2D data in the post-processing of a captured Take. When a capture is recorded in Motive, both 2D camera data and reconstructed 3D data are saved into a Take file, and by default, the 3D data gets loaded first when a recorded Take file is opened.
Recorded 3D data contains only the 3D coordinates that were live-reconstructed at the moment of capture; in other words, this data is completely independent of the 2D data once the recording has been made. You can still, however, view and use the recorded 2D data to optimize the solver parameters and reconstruct a fresh set of 3D data from it. To do so, you need to switch into the 2D Mode in the Data pane.
In 2D Mode, Motive reconstructs in real-time from the recorded 2D data, using the reconstruction/solver settings that were configured in the Application Settings at the time of recording; these settings are saved under the properties of the corresponding TAK file. Please note that the reconstruction/solver settings from the TAK properties get applied for post-processing, instead of the settings from the Application Settings panel. When in 2D Mode while editing a TAK file, any changes to the reconstruction/solver settings under the TAK properties will be reflected, in real-time, in how the 3D reconstructions are solved.
Switching to 2D Mode
Applying changes to 3D data
Once the reconstruction/solver settings have been adjusted and optimized on recorded data, the post-processing reconstruction pipeline needs to be performed on the Take in order to reconstruct a new set of 3D data. Here, note that the existing 3D data will get overwritten and all of the post-processing edits on it will be discarded.
The post-processing reconstruction pipeline allows you to convert 2D data from recorded Take into 3D data. In other words, you can obtain a fresh set of 3D data from recorded 2D camera frames by performing reconstruction on a Take. Also, if any of the Point Cloud reconstruction parameters have been optimized post-capture, the changes will be reflected on the newly obtained 3D data.
To perform post-processing reconstruction, open the Data pane, select the desired Takes, right-click on the selection, and use either the Reconstruct pipeline or the Reconstruct and Auto-label pipeline from the context menu.
Camera Filter Settings: In Edit mode, 2D camera filters can still be modified from the tracking group properties in the Devices pane. Modified filter settings will change which markers in the recorded 2D data get processed through the Live Pipeline engine.
Solver/Reconstruction Settings: When you perform post-processing reconstruction on recorded Take(s), a new set of 3D data will be reconstructed from the filtered 2D camera data. In this step, the solver settings defined under the corresponding Take properties in the Properties pane will be used. Note that the reconstruction properties under the Application Settings are for Live capture only.
Reconstruct and Auto-label will additionally apply the auto-labeling pipeline to the obtained 3D data and label any markers that associate with existing asset (Rigid Body or Skeleton) definitions. The auto-labeling pipeline is explained in more detail on the Labeling page.
Post-processing reconstruction can be performed either on an entire Take frame range or only within a desired frame range by selecting the range under the Control Deck or in the Graph pane. When nothing is selected, reconstruction will be applied to all frames.
Entire frames of multiple Takes can be selected and processed altogether by selecting desired Takes under the Data pane.
Reconstructing recorded Takes again either by Reconstruct or Reconstruct and Auto-label pipeline will completely overwrite existing 3D data, and any post-processing edits on trajectories and marker labels will be discarded.
Also, for Takes involving Skeleton assets, if the Skeletons are never in well-trackable poses throughout the captured Take, the recorded Skeleton marker labels, which were intact during the live capture, may be discarded, and reconstructed markers may not be auto-labeled again. This is another reason why you want to start a capture with a calibration pose (e.g. T-pose).
The Application Settings panel can be opened under the View tab or by clicking the icon on the main toolbar in Motive. Most of the settings that are related to the overall software and the system can be accessed and configured in this panel. This includes camera system settings, data pipeline settings, streaming settings, and hotkeys and shortcuts.
Changes to the Application Settings can be reset via Reset Settings under the Edit Tools tab from the main Toolbar.
Advanced Settings
The Application Settings contain advanced settings that are hidden by default. To access these settings, go to the menu at the top-right corner of the pane and click Show Advanced; all of the settings, including the advanced ones, will then be listed in the pane.
The list of advanced settings can also be customized to show only the settings that are needed specifically for your capture application. To do so, go to the pane menu, click Edit Advanced, and uncheck the settings that you wish to be listed in the pane by default. Once all desired settings are unchecked, click Done Editing to apply the customized configuration.
The Assets tab in the application settings panel is where you can configure the creation properties for Rigid Body and Skeleton assets. In other words, all of the settings configured in this tab will be assigned to the Rigid Body and Skeleton that are newly created in Motive.
You can change the naming convention of Rigid Bodies when they are first created. For instance, if it is set to RigidBody, the first Rigid Body will be named RigidBody when first created. Any subsequent Rigid Bodies will be named RigidBody 001, RigidBody 002, and so on.
User definable ID. When streaming tracking data, this ID can be used as a reference to specific Rigid Body assets.
The minimum number of markers that must be labeled in order for the respective asset to be booted.
The minimum number of markers that must be labeled in order for the respective asset to be tracked.
Applies double exponential smoothing to translation and rotation. Disabled at 0.
Compensate for system latency by predicting movement into the future.
For this feature to work best, smoothing needs to be applied as well.
Toggle 'On' to enable. Displays asset's name over the corresponding skeleton in the 3D viewport.
Select the default color a Rigid Body will have upon creation. Select 'Rainbow' to cycle through a different color each time a new Rigid Body is created.
When enabled, this shows a visual trail behind a Rigid Body's pivot point. You can change the History Length, which determines how long the trail persists before retracting.
Shows a Rigid Body's visual overlay. This is by default Enabled. If disabled, the Rigid Body will only appear as individual markers with the Rigid Body's color and pivot marker.
When enabled for Rigid Bodies, this will display the Rigid Body's pivot point.
Shows the transparent sphere that represents where an asset first searches for markers, i.e. the Marker Constraints.
When enabled and a valid geometric model is loaded, the model will draw instead of the Rigid Body.
Allows the asset to deform more or less to accommodate markers that don't fit the model. Higher values will allow assets to fit onto markers that don't match the model as well.
Creates the Skeleton with arms straight even when arm markers are not straight.
Creates the Skeleton with straight knee joints even when leg markers are not straight.
Creates the Skeleton with feet planted on the ground level.
Creates the Skeleton with heads upright irrespective of head marker locations.
Force the solver so that the height of the created Skeleton aligns with the top head marker.
Height offset applied to hands to account for markers placed above the wrist and knuckle joints.
Same as the Rigid Body visuals above:
Label
Creation Color
Bones
Marker Constraints
Changes the color of the skeleton visual to red when there are no markers contributing to a joint.
Displays the coordinate axes of each joint.
Displays the lines between labeled skeleton markers and corresponding expected marker locations.
Displays lines between skeleton markers and their joint locations.
In Motive, the Application Settings can be accessed under the View tab or by clicking the icon on the main toolbar. Default Application Settings can be recovered by Reset Application Settings under the Edit Tools tab from the main Toolbar.
The Mouse tab under the application settings is where you can check and customize the mouse actions to navigate and control in Motive.
The following table shows the most basic mouse actions:
You can also pick a preset mouse action profile to use. The presets can be accessed from the drop-down menu. You can choose from the provided presets, or save out your current configuration into a new profile to use later.
Configured hotkeys can be saved into preset profiles to be used on a different computer or to be imported later when needed. Hotkey presets can be imported or loaded from the drop-down menu:
In Motive, the Application Settings can be accessed under the View tab or by clicking the icon on the main toolbar. Default Application Settings can be recovered by Reset Application Settings under the Edit Tools tab from the main Toolbar.
In Motive, the Data Streaming pane can be accessed under the View tab or by clicking the icon on the main toolbar. For explanations on the streaming workflow, read through the Data Streaming page.
This section allows you to stream tracking data via Motive's free streaming plugins or any custom-built NatNet interfaces. To begin streaming, select Broadcast Frame Data. Select which types of data (e.g. markers, Rigid Bodies, or Skeletons) will be streamed, noting that some third party applications will only accept one type of data. Before you begin streaming, ensure that the network type and interface are consistent with the network you will be streaming over and the settings in the client application.
(Default: False) Enables/disables broadcasting, or live-streaming, of the frame data. This must be set to true in order to start the streaming.
(Default: loopback) Sets the network address which the captured frame data is streamed to. When set to local loopback (127.0.0.1) address, the data is streamed locally within the computer. When set to a specific network IP address under the dropdown menu, the data is streamed over the network and other computers that are on the same network can receive the data.
(Default: Multicast) Selects the mode of broadcast for NatNet. Valid options are: Multicast, Unicast.
(Default: True) Enables, or disables, streaming of labeled Marker data. These markers are point cloud solved markers.
(Default: True) Enables/disables streaming of all of the unlabeled Marker data in the frame.
(Default: True) Enables/disables streaming of the Marker Set markers, which are named collections of all of the labeled markers and their positions (X, Y, Z). In other words, this includes markers that are associated with any of the assets (Marker Set, Rigid Body, Skeleton). The streamed list also contains a special marker set named all, which is a list of labeled markers in all of the assets in a Take. In this data, Skeleton and Rigid Body markers are point cloud solved and model-filled on occluded frames.
(Default: True) Enables/disables streaming of Skeleton tracking data from active Skeleton assets. This includes the total number of bones and their positions and orientations with respect to the global, or local, coordinate system.
When enabled, this streams active peripheral devices (ie. force plates, Delsys Trigno EMG devices, etc.)
(Default: Global) When set to Global, the tracking data will be represented according to the global coordinate system. When this is set to Local, the streamed tracking data (position and rotation) of each skeletal bone will be relative to its parent bones.
(Default: Motive) Sets the bone naming convention of the streamed data. Available conventions include Motive, FBX, and BVH. The naming convention must match the format used in the streaming destination.
(Default: Y Axis) Selects the upward axis of the right-hand coordinate system in the streamed data. When streaming onto an external platform with a Z-up right-handed coordinate system (e.g. biomechanics applications) change this to Z Up.
(Default: False) When set to true, Skeleton assets are streamed as a series of Rigid Bodies that represent respective Skeleton segments.
(Default: True) When set to true, associated asset name is added as a subject prefix to each marker label in the streamed data.
Enables streaming to Visual3D. Normal streaming configurations may not be compatible with Visual3D, and this feature must be enabled when streaming tracking data to Visual3D.
Applies scaling to all of the streamed position data.
(Default: 1510) Specifies the port to be used for negotiating the connection between the NatNet server and client.
(Default: 1511) Specifies the port to be used for streaming data from the NatNet server to the client(s).
Specifies the multicast broadcast address. (Default: 239.255.42.99). Note: When streaming to clients based on NatNet 2.0 or below, the default multicast address should be changed to 224.0.0.1 and the data port should be changed to 1001.
Warning: This mode is for testing purposes only, and it can flood the network with the streamed data.
When enabled, Motive streams out the mocap data via broadcasting instead of sending to Unicast or Multicast IP addresses. This should be used only when the use of Multicast or Unicast is not applicable. Broadcasting will essentially spam the network with streamed mocap data, which may interfere with other traffic, so a dedicated NatNet streaming network may need to be set up between the server and the client(s). To use broadcast, set the streaming option to Multicast and enable this setting on the server. Once it starts streaming, set the NatNet client to connect as Multicast, and then set the multicast address to 255.255.255.255. Once Motive starts broadcasting the data, the client will receive broadcast packets from the server.
Warning: Do not modify unless instructed.
(Default: 1000000)
This controls the socket size while streaming via Unicast. This property can be used to make extremely large data rates work properly.
(Default: False) When enabled, Motive streams Rigid Body data via the VRPN protocol.
[Advanced] (Default: 3883) Specifies the broadcast port for VRPN streaming.
In Motive, the Application Settings can be accessed under the View tab or by clicking the icon on the main toolbar. Default Application Settings can be recovered by Reset Application Settings under the Edit Tools tab from the main Toolbar.
If you have an audio input device, you can record synchronized audio along with motion capture data in Motive. Recorded audio files can be played back from a captured Take or exported into WAV audio files. This page details how to record and play back audio in Motive. Before using an audio input device (microphone) in Motive, first make sure that the device is properly connected and configured in Windows.
In Motive, audio recording and playback settings can be accessed from the Application Settings.
In Motive, open the Audio Settings, and check the box next to Enable Capture.
Select the audio input device that you want to use.
Press the Test button to confirm that the input device is properly working.
Make sure the device format of the recording device matches the device format that will be used in the playback devices (speakers and headsets).
Capture the Take.
Enable the Audio device before loading the TAK file with audio recordings. Enabling it afterward is currently not supported, as the audio engine gets initialized on TAK load.
Open a Take that includes audio recordings.
To playback recorded audio from a Take, check the box next to Enable Playback.
Select the audio output device that you will be using.
Make sure the configurations in Device Format closely match the Take Format. This is elaborated further in the section below.
Play the Take.
In order to play back audio recordings in Motive, the audio format of the recorded sounds MUST closely match the audio format used in the output device. Specifically, the channels and frequency of the audio must match. Otherwise, the recorded sound will not be played back.
The recorded audio format is determined by the format of a recording device that was used when capturing Takes. However, audio formats in the input and output devices may not always agree. In this case, you will need to adjust the input device properties to match them. Device's audio format can be configured under the Sound settings in Windows. In Sound settings (accessed from Control Panel), select the recording device, click on Properties, and the default format can be changed under the Advanced Tab, as shown in the image below.
If you want to use an external audio input system to record synchronized audio, you will need to connect the motion capture system into a Genlock signal or a Timecode device. This will allow you to precisely synchronize the recorded audio along with the capture data.
In Motive, the Application Settings can be accessed under the View tab or by clicking the icon on the main toolbar. Default Application Settings can be recovered by Reset Application Settings under the Edit Tools tab from the main Toolbar.
Live-Pipeline settings contain camera filter settings and solver settings for obtaining 3D data in Motive. Please note that these settings are optimized by default and should provide high-quality tracking for most applications. The settings that might need to be adjusted based on the application are visible by default (i.e. not advanced).
The most commonly changed settings are...
Coarse/Fine IK Iterations - This helps Skeletons converge to a good pose quickly when Skeletons start in a difficult to track pose.
Minimum Rays to Start/Continue - This helps reduce false markers from semi-reflective objects when there is a lot of camera overlap. It also lets you avoid tracking a marker that is seen by only one camera (Minimum Rays to Continue = 2).
Boot Skeleton Label Percentage - A lower value will allow Skeletons to boot more quickly when entering the volume. A higher value will prevent untracked Skeletons from attempting to track using other markers in the volume.
Solver settings for recorded captures:
Please note that these settings are applied only to the Live 3D data. For captures that are already recorded, you can optimize them from the properties of the corresponding TAK file.
The solver settings control how each marker trajectory gets reconstructed into the 3D space and how Rigid Bodies and Skeletons track. The solver is designed to work for most applications without needing to modify any settings. However, in some instances changing some settings will lead to better tracking results. The settings that may need to be changed are visible by default. There are also a large number of advanced settings that we don’t recommend changing, but the tooltips are available if needed. The settings that users may need to change are listed below with descriptions.
These are general tracking settings for the solver, not related to creating 3D markers or booting assets. Do not change these settings in Live mode, as incorrect settings can negatively affect the tracking; they are mostly useful when optimizing 3D data for recorded captures with actors in positions that are difficult to track.
What it does: This property sets the number of Coarse IK iterations, a fast but less accurate inverse kinematic solve used to place the Skeleton on the associated markers.
When to change: Do not change this property in Live mode. In recorded captures, this property may need to be changed, under the TAK properties, if the recording starts with actors who are not in standing positions. Sometimes in these recordings the Skeletons may not solve on the first couple of frames; in these cases, increasing this setting will allow the Skeleton to converge on the first frame.
What it does: This property sets the number of Fine IK iterations, a slower but more accurate inverse kinematic solve used to place the final pose of the Skeleton on the associated markers. Increasing this setting may result in higher CPU usage.
When to change: Do not change this property in Live mode. In recorded captures, this property may need to be changed, under the TAK properties, if the recording starts with actors who are not in standing positions or in poses that are difficult to solve. Sometimes in these recordings the Skeletons may not solve on the first couple of frames; in these cases, increasing this setting will allow the Skeleton to converge on the first frame.
The Trajectorizer settings control how the 2D marker data is converted into 3D points in the calibrated volume. The trajectorizer performs reconstruction of 2D data into 3D data, and these settings control how markers are created in the 3D scene over time.
What it does: This setting controls the maximum distance between a marker trajectory and its predicted position.
When to change: This setting may need to be increased when tracking extra fast assets. The default setting should track most applications. Attempt to track with default settings first, and if there are any gaps in the marker trajectories, you can incrementally increase the distance until stable tracking is achieved.
What it does: This setting controls the maximum distance between a ray and the marker origin.
When to change: For large volumes with high camera counts, increasing this value may provide more accurate and robust tracking. The default value of 3 works well with most medium and small-sized volumes. For volumes that only have two cameras, the trajectorizer will use a value of 2 even when it's not explicitly set.
What it does: This sets the minimum number of rays that need to converge on one location in order to continue tracking a marker that already initialized near that location. A value of 1 will use asset definitions to continue tracking markers even when a 3D marker could not have been created from the camera data without the additional asset information.
When to change: This is set to 1 by default. It means that Motive will continue the 3D data trajectory as long as at least one ray is obtained and the asset definition matches. When single ray tracking is not desired or for volumes with a large number of cameras, change this value to 2 to utilize camera overlaps in the volume.
What it does: This setting is used for tracking active markers only, and it sets the number of frames of motion capture data used to uniquely identify the ID value of an active marker.
When to change: When using a large number of active tags or active pucks, this setting may need to be increased. It's recommended to use the active batch programmer when configuring multiple active components, and when each batch of active devices has been programmed, the programmer will provide a minimum active pattern depth value that should be used in Motive.
What it does: The total number of rays that must contribute to an active marker before it is considered active and given an ID value.
When to change: Change this setting to increase the confidence in the accuracy of active marker ID values (not changed very often).
What it does: The number of frames of data that the solver will attempt to fill if a marker goes missing for some reason. This value must be at least 1 if you are using active markers.
When to change: If you would like more or fewer frames to be filled when there are small gaps.
The Booter settings control when the assets start tracking, or boot, on the trajectorized 3D markers in the scene. In other words, these settings determine when Rigid Bodies and/or Skeletons track on a set of markers.
What it does: This controls the maximum distance between a pair of Marker Constraints to be considered as an edge in the label graph.
When to change: The default settings should work for most applications. This value may need to be increased to track large assets with markers that are far apart.
When to change: The default settings should work for most applications. Set this value to about 75% to help keep Skeletons from booting on other markers in the volume if there are similar Skeleton definitions or lots of loose markers in the scene. If you would like Skeletons to boot faster when entering the volume, then you can set this value lower.
Controls the deceleration of the asset joint angles in the absence of other evidence. For example, a setting of 60% will reduce the velocity by 99% in 8 frames; whereas 80% will take 21 frames to do the same velocity reduction.
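As a rough check of those figures, if the percentage is read as the fraction of velocity retained per frame (an assumption about the internal model), the number of frames n needed for a 99% reduction satisfies:

```latex
r^{\,n} = 0.01
\quad\Longrightarrow\quad
n = \frac{\ln 0.01}{\ln r}
```

For r = 0.8 this gives n of about 20.6, i.e. roughly 21 frames, matching the example above; for r = 0.6 it gives n of about 9, close to the 8 frames quoted.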
The residual is the distance between a Marker Constraint and its assigned trajectory. If the residual exceeds this threshold, then that assignment will be broken. A larger value helps catch rapid acceleration of limbs, for example.
Ignores reconstructed 3D points outside of the reconstruction bounds.
This will be the general shape of the reconstruction bounds. Choose from the following:
Cuboid
Cylinder
Spherical
Ellipsoid
The rest of the settings found under this tab can be modified in relation to center, width, radius, and height.
Two marker trajectories discovered within this distance are merged into a single trajectory.
A marker trajectory is predicted on a new frame and then projected into all the cameras. To be assigned to a marker detection in a particular camera, the distance (in pixels) must not exceed this threshold.
The maximum number of pixels between a camera detection and the projection of its marker.
The new marker trajectory is generated at the intersection of two rays through detections in different cameras. Each detection must be the only candidate within this many pixels of the projection of the other ray.
Marker trajectories are predicted on the next frame to have moved with this percentage of their velocity on the previous frame.
When a Skeleton marker trajectory is not seen, its predicted position reverts towards its assigned Marker Constraints by this percentage.
When a Rigid Body marker trajectory is not seen, its predicted position reverts towards its assigned Marker Constraints by this percentage.
The penalty for leaving Marker Constraints unassigned (per label graph edge).
The maximum average distance between the marker trajectory and the Marker Constraints before the asset is rebooted.
This value controls how willing an asset is to boot onto markers. A higher value will make assets boot faster when entering the volume. A lower value will stop assets from booting onto other markers when they leave the volume.
This is a less accurate but fast IK solve meant to get the skeleton roughly near to the final pose while booting.
This is a more accurate but slow IK solve meant to get the skeleton to the final pose while booting. (High values will slow down complex takes.)
The maximum number of assets to boot per frame.
This section of the application settings is used for configuring the 2D filter properties for all of the cameras.
The minimum pixel size of a 2D object, a collection of pixels grouped together, for it to be included in the Point Cloud reconstruction. All pixels must first meet the brightness threshold defined in the Cameras pane in order to be grouped as a 2D object. This can be used to filter out small reflections that are flickering in the view. The default value for the minimum pixel size is 4, which means that there must be 4 or more pixels in a group for a ray to be generated.
This setting sets the threshold of the circularity filter. The valid range is between 0 and 1, with 1 being a perfectly round reflection and 0 being flat. Using this 2D object filter, the software can identify marker reflections using the shape, specifically the roundness, of the group of thresholded pixels. A higher circularity setting will filter out reflections that are not circular. It is recommended to optimize this setting so that extraneous reflections are efficiently filtered out without filtering out the marker reflections.
When using lower resolution cameras to capture smaller markers at a long distance, the marker reflection may appear to be more pixelated and non-circular. In this case, you may need to lower the circularity filter value for the reflection to be considered as a 2D object from the camera view. Also, this setting may need to be lowered when tracking non-spherical markers in order to avoid filtering the reflections.
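As an illustration of what a roundness score can look like, one common metric from image processing (not necessarily Motive's internal implementation) compares a blob's area to its perimeter:

```cpp
#include <cmath>

// Hypothetical circularity score for a blob of thresholded pixels:
// 4*pi*area / perimeter^2 equals 1 for a perfect circle and approaches 0
// for elongated or flat shapes. A common image-processing metric, shown
// here only to illustrate the idea of a 0-to-1 roundness value.
double Circularity(double areaPixels, double perimeterPixels)
{
    const double kPi = 3.14159265358979323846;
    if (perimeterPixels <= 0.0) return 0.0;
    return 4.0 * kPi * areaPixels / (perimeterPixels * perimeterPixels);
}
```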
Changes the padding around masks by pixels.
Delay this group from sync pulse by this amount.
Controls how the synchronizer operates. Options include:
Force Timely Delivery
Favor Timely Delivery
Force Complete Delivery
Choose the filter type. Options include:
Size and Roundness
None
Minimum Pixel Threshold
The minimum allowable size of the 2D object (pixels over threshold).
The maximum allowable size of the 2D object (pixels over threshold).
The size of the guard region beyond the object margin for neighbor detection.
The pixel intensity of the grayscale floor (pixel intensity).
The minimum space (in pixels) between objects before they begin to overlap.
The number of pixels a 2D object is allowed to lean.
The maximum allowable aspect tolerance to process a 2D object (width:height).
The allowable aspect tolerance for very small objects.
The rate at which the aspect tolerance relaxes as object size increases.
In Motive, the Application Settings can be accessed under the View tab or by clicking the icon on the main toolbar. Default Application Settings can be recovered by Reset Application Settings under the Edit Tools tab from the main Toolbar.
The 2D tab under the view settings lists out display settings for the 2D camera views in Motive.
Sets the background color of the 2D camera view.
Enables markers selected from the 3D Perspective View to be also highlighted with yellow crosshairs in the 2D camera view, based on calculated position. Crosshairs that are not directly over the marker tend to indicate occlusion or poor camera calibration.
When enabled, the Camera View shows which markers have been filtered out by the camera's circularity and size filter. This is enabled by default and is useful for inspecting why certain cameras are not tracking specific markers in the view.
Sets the background color of the Perspective View.
Turns a gradient “fog” effect on in the Perspective View.
Selects the color of the ground plane grid in the Perspective View.
Selects the size of the ground plane grid in the Perspective View. Specifically, it sets the number of grids along the positive and negative direction in both the X and Z axis. Each grid represents 20cm x 20cm in size within a calibrated volume.
When enabled, Motive will display the floor plane in the Perspective View. This is disabled by default to only show the floor grid.
Sets the color of selections in the 3D view port.
Displays the coordinate axis in the 3D view port.
Determines where timecode gets displayed in Motive. Timecode can be displayed either on the Perspective View or the Control Deck or hidden entirely. Timecode will be available only when the timecode signal is inputted through the eSync.
Show or hide marker count report located at the bottom-right corner of the Perspective View.
Overlays the OptiTrack logo over top of the Perspective View.
Overlays refresh rate of the display on the Perspective View.
Determines whether marker sizes in the 3D Perspective View are represented by the calculated size or overwritten with a set diameter.
When the Marker Size setting above is set to Custom, the diameter of the 3D markers will all be fixed to the inputted diameter.
Sets the color for passive markers in the 3D viewport. Retro-reflective markers or continuously illuminating IR LEDs will be recognized as passive markers in Motive.
When this is set to true, the 3D positions and estimated diameters of selected markers will be displayed in the 3D viewport.
Displays a history trail of marker positions over time.
When both the marker history and this setting is enabled, marker history trail will be shown for only selected markers in the viewport.
Number of past frames for showing the marker history.
Sets the color of reference cameras in the 3D Perspective View. Cameras that are capturing reference MJPEG grayscale videos, or color videos for Prime Color cameras, will be considered as reference cameras.
Sets the color for Tracked Rays in the 3D Perspective View.
Sets the color for unlabeled rays in the 3D Perspective View.
Sets the color for untracked rays in the 3D Perspective View.
Sets the color used for visualizing the capture volume.
Minimum number of cameras required for their FOV to overlap when visualizing the capture volume.
Sets the color for labeled markers. Markers that are labeled using either Rigid Body or Skeleton solve will be colored according to their asset properties.
Shows rays stemming from camera to markers that have not been labeled.
Displays all tracked rays.
Background color used for the plots.
The scope of domain range, in frames, used for plotting graphs.
The Assets pane in Motive lists out all of the assets involved in the Live, or recorded, capture and allows users to manage them. This pane can be accessed under the View tab in Motive or by clicking the icon on the main toolbar.
A list of all assets associated with the Take is displayed in the Assets pane. Here, you can view the assets and right-click on an asset to export, remove, or rename the selected asset in the current Take.
Exports selected Rigid Bodies into either a Motive file (.motive) or a CSV file. Exports selected Skeletons into either a Motive file (.motive) or an FBX file.
Exports a Skeleton marker template constraint XML file. The exported constraint files contain marker constraints that can be modified and imported again.
Imports Skeleton marker template constraint XML file onto the selected asset. If you wish to apply the imported XML for labeling, all of the Skeleton markers need to be unlabeled and auto-labeled again.
Imports the default Skeleton marker template constraint XML files. This essentially colors the labeled markers and creates marker sticks that interconnect consecutive labels.
This is only possible when post-processing a recorded TAK. Solving an Asset bakes its 6 DoF data into the recording. Once the asset is solved, Motive plays back the recording from the recorded Solved data.
Exports FBX actor of the Skeleton.
Highlight, or select, the desired frame range in the Graph pane, and zoom into it using the zoom-to-fit hotkey (F) or the icon.
Set the working range from the Control Deck by inputting start and end frames on the field.
Start frame of the exported data. You can either set it to the recorded first frame of the exported Take or to the start of the working range, or scope range, as configured under the or in the .
End frame of the exported data. You can either set it to the recorded end frame of the exported Take or to the end of the working range, or scope range, as configured under the or in the .
Start frame of the exported data. You can either set it to the recorded first frame of the exported Take or to the start of the working range, or scope range, as configured under the or in the .
End frame of the exported data. You can either set it to the recorded end frame of the exported Take or to the end of the working range, or scope range, as configured under the or in the .
Global: Defines the position and orientation with respect to the global coordinate system of the calibrated capture volume. The global coordinate system has its origin at the ground plane, which was set with a calibration square during the calibration process.
Local: Defines the bone segment position and orientation with respect to the coordinate system of the parent segment. Note that the hip of the skeleton is always the top-most parent of the segment hierarchy. Local coordinate axes can be set to visible in the 3D viewport. The bone segment rotation values in the local coordinate space can be used to roughly represent joint angles; however, for precise analysis, joint angles should be computed through a biomechanical analysis software package using the exported capture data (C3D).
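For a quick sanity check on the relationship between the two spaces, a global position can be recovered from local data by applying the parent segment's rotation and translation. A minimal numpy sketch is below; the x/y/z/w quaternion component order is an assumption, so verify it against your export settings:

```python
import numpy as np

def quat_to_matrix(q):
    """Rotation matrix from a unit quaternion, assuming (x, y, z, w) order."""
    x, y, z, w = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def local_to_global(p_local, parent_quat, parent_pos):
    """Transform a local (bone-space) position into the global coordinate system."""
    return quat_to_matrix(parent_quat) @ np.asarray(p_local) + np.asarray(parent_pos)
```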
Displays which data type is listed in each corresponding column. Data types include raw marker, Rigid Body, Rigid Body marker, bone, bone marker, or unlabeled marker. Read more about .
View/hide
View/hide
Enable/Disable
Start frame of the exported data. You can either set it to the recorded first frame of the exported Take or to the start of the working range, or scope range, as configured under the or in the .
End frame of the exported data. You can either set it to the recorded end frame of the exported Take or to the end of the working range, or scope range, as configured under the or in the .
Using the Labels pane, you can assign marker labels for each asset (Marker Set, Rigid Body, and Skeleton) via the QuickLabel Mode. The Labels pane also shows a list of labels involved in the Take and their corresponding percent-completeness values. The percent-completeness values indicate the percentage of frames in a Take for which the trajectory has been labeled. If the trajectory has no gaps (100% complete), no number will be shown. You can use this pane together with the Graph View pane to quickly locate gaps in a trajectory.
In the Perspective View pane, assign the selected label to a marker. If the Increment option () is set under the Labels pane, the label selection in the Labels pane will automatically advance each time you assign a label.
Show/Hide Skeleton visibility under the visual aids options in the perspective view to have a better view on the markers when assigning marker labels.
Toggle Skeleton selectability under the selection option in the perspective view to use the Skeleton as a visual aid without it getting in the way of marker data.
Show/Hide Skeleton sticks and marker colors under the visual aids in the perspective view options for intuitive identification of labeled markers as you tag through Skeleton markers.
Start frame of the exported data. You can either set it to the recorded first frame of the exported Take or to the start of the working range, or scope range, as configured under the or in the .
End frame of the exported data. You can either set it to the recorded end frame of the exported Take or to the end of the working range, or scope range, as configured under the or in the .
Can only be exported when Solved data is recorded for exported Skeleton assets. Exports 6 Degree of Freedom data for every bone segment in selected Skeletons.
Can only be exported when Solved data is recorded for exported Rigid Body assets. Exports 6 Degree of Freedom data for selected Rigid Bodies. Orientation axes are displayed at the geometrical center of each Rigid Body.
Start frame of the exported data. You can either set it to the recorded first frame of the exported Take or to the start of the working range, or scope range, as configured under the or in the .
End frame of the exported data. You can either set it to the recorded end frame of the exported Take or to the end of the working range, or scope range, as configured under the or in the .
Export Skeleton nulls. Please note that Solved data must be recorded for Skeleton bone tracking data to be exported. This exports 6 Degree of Freedom data for every bone segment in selected Skeletons.
Can only be exported when Solved data is recorded for exported Rigid Body assets. Exports 6 Degree of Freedom data for selected Rigid Bodies. Orientation axes are displayed at the geometrical center of each Rigid Body.
In Motive, all of the recorded capture files are managed through the Data pane. Each capture will be saved in a Take (TAK) file, which can be played back in Edit mode later. Related Take files can be grouped within session folders. Simply create a new folder in the desired directory and load the folder onto the Data pane. The currently selected session folder is indicated with the flag symbol (), and all newly recorded Takes will be saved in this folder.
You can check and/or switch the video type of a selected camera from either the camera properties or the viewports. You can also toggle cameras between tracking mode and reference mode in the Devices pane by clicking on the Mode button ( / ). If you want to use all of the cameras for tracking, make sure all of the cameras are in Tracking mode.
Cameras can also be set to record reference videos during capture. When using MJPEG mode, these videos are synchronized with the other captured frames and are used to observe what goes on during the recorded capture. To record reference video, switch a camera into MJPEG mode by toggling its camera mode in the Devices pane.
To quickly access the streaming settings, click on the streaming icon () from the control deck. This will open the streaming tab in the application settings panel.
List of involved in the Take.
Timecode values (SMPTE) for frame alignment, or for reserving future record trigger events on timecode-supported systems. Camera systems usually have higher frame rates than the SMPTE timecode rate. In the triggering packets, the timecode subframe value is always equal to 0 at the trigger.
List of involved in the Take
Timecode values (SMPTE) for frame alignment. The subframe value is zero.
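As a worked example of the frame-rate relationship (the rates here are illustrative, not prescriptive): a 240 FPS camera system running against 30 FPS SMPTE timecode produces 8 camera frames per timecode frame, so each camera frame carries a subframe index of 0-7:

```python
camera_fps = 240           # example camera system rate
timecode_fps = 30          # example SMPTE timecode rate
subframes_per_tc_frame = camera_fps // timecode_fps  # 8 camera frames per timecode frame

def subframe(camera_frame_number: int) -> int:
    """Subframe index (0-7 in this example) of a camera frame within its timecode frame."""
    return camera_frame_number % subframes_per_tc_frame
```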
Enable Marker Size under the visual aids () in the Camera Preview viewport to inspect which reflections are accepted, or omitted, by the size filter.
Enable Marker Circularity under the visual aids in the Camera Preview viewport to inspect which reflections are accepted, or omitted, by the circularity filter.
Monitoring marker rays is an efficient way of inspecting reconstruction outcomes. The rays show up by default, but if not, they can be enabled under the visual aids options in the 3D viewport toolbar. There are two different types of marker rays in Motive: tracked rays and untracked rays. By inspecting these marker rays, you can easily find out which cameras are contributing to the reconstruction of a selected marker.
Under the Data pane, click to access the menu options and check the 2D Mode option.
A list of the default Rigid Body creation properties is listed under the Rigid Bodies tab. These properties apply only to Rigid Bodies created after the properties have been modified. For descriptions of the Rigid Body properties, please read through the page.
Note that these are the default creation properties. Asset-specific Rigid Body properties are modified directly from the .
A list of the default Skeleton display properties for newly created Skeletons is listed under the Skeletons tab. These properties apply only to Skeleton assets created after the properties have been modified. For descriptions of the Skeleton properties, please read through the page.
Note that these are the default creation properties. Asset-specific Skeleton properties are modified directly from the .
The Keyboard tab under the application settings allows you to assign specific hotkey actions to make Motive easier to use. A list of default key actions can also be found on the following page:
(Default: True) Enables/disables streaming of Rigid Body data, which includes the name of Rigid Body assets as well as positions and orientations of their .
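For reference, the NatNet SDK ships a Python sample client (NatNetClient.py); a minimal sketch of receiving this Rigid Body stream with it is below. The module and attribute names follow the SDK's Python sample and may differ between SDK versions, so treat them as assumptions to verify against your SDK copy.

```python
# Sketch: consuming streamed Rigid Body data with the NatNet SDK's Python
# sample client. Module/attribute names follow the SDK sample (NatNetClient.py)
# and may vary by SDK version.
from NatNetClient import NatNetClient

def receive_rigid_body(body_id, position, rotation):
    # position: (x, y, z); rotation: quaternion (qx, qy, qz, qw)
    print(f"Rigid Body {body_id}: pos={position} rot={rotation}")

client = NatNetClient()
client.rigidBodyListener = receive_rigid_body
client.run()  # starts the listening threads
```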
(Default: False) Allows using a remote trigger for recording via XML commands, as sketched below. See more:
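A minimal sketch of such a trigger packet follows. The CaptureStart element and VALUE attributes follow OptiTrack's XML trigger format, but the broadcast address and port are assumptions; check the streaming settings in Motive for the configured command port.

```python
# Sketch: broadcasting an XML remote-trigger command to Motive over UDP.
# The port (1510) and broadcast address are assumptions; verify against
# the streaming settings in Motive.
import socket

capture_start = (
    '<?xml version="1.0" encoding="utf-8"?>'
    '<CaptureStart>'
    '<Name VALUE="Take_001"/>'
    '<TimeCode VALUE="00:00:00:00"/>'
    '</CaptureStart>'
)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
sock.sendto(capture_start.encode("utf-8"), ("255.255.255.255", 1510))
```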
For information on streaming data via the VRPN Streaming Engine, please visit the . Note that only 6 DOF Rigid Body data can be streamed via VRPN.
Recorded audio files can be exported into WAV format. To export, right-click on a Take from the and select Export Audio option in the context menu.
For more information on synchronizing external devices, read through the page.
What it does: This sets the minimum number of rays that need to converge on one location to create a new marker in 3D. This is also the minimum number of calibrated cameras that must see the same target marker within the 3D threshold value for it to initially get trajectorized into a 3D point.
What it does: This sets the percentage of Skeleton markers that need to be trajectorized in order to track the corresponding Skeleton(s). If needed, this setting can also be configured per asset from the corresponding asset properties using the .
The 3D tab under the view settings lists the display settings for the 3D viewport in Motive.
Sets the color for in the 3D viewport.
Sets the color for measurement markers that are sampled using the .
Sets the color of tracking cameras in the 3D Perspective View. Cameras that are set to will be considered as tracking cameras in Motive.
Colors used for plot guidelines in the .
When enabled, the y-axis of each plot will autoscale to fit all the data in the view. It will also zoom automatically for best visualization. For fixed y-plot ranges, this setting can be disabled. See for more information.
Preferred used for Live mode.
Preferred used for Edit mode.
You can also enable or disable assets by checking or unchecking the box next to each asset. Only enabled assets will be visible in the 3D viewport and used by the auto-labeler to label the markers associated with the respective assets.
In the Assets pane, the context menu for involved assets can be accessed by clicking on the context menu button or by right-clicking on selected asset(s). The context menu lists the available actions for the corresponding assets.
Re-calibrates an existing Skeleton. This feature is essentially the same as re-creating a Skeleton using the same Skeleton Marker Set. See the page for more information on using the Skeleton template XML files.
Rotate view: right mouse button + drag
Pan view: middle (wheel) click + drag
Zoom in/out: mouse wheel
Select in View: left mouse click
Toggle Selection in View: CTRL + left mouse click
This page provides instructions on how to use the Constraints pane in Motive.
The Constraints pane is intended to allow further optimization around the solver constraints which currently only includes asset model markers. It also allows users to change the name and color of markers for an asset. The asset that you are working with is linked to the selection in Motive unless the “Link to 3D Selection” toggle next to the asset name is turned off.
The default view of the Constraints pane shows the labels and colors, either of which can be modified to customize your asset. You can also enable three other columns that help control how the solver interacts with markers. The sections below detail each of these columns.
Constraints (Names)
The “Constraint” column lists the labels of the asset model markers associated with an asset. Labels can be modified in the Constraints pane by slow-clicking on a label or through the right-click context menu. You can sort the “Constraint” column alphabetically, ascending or descending. By default, however, this column sorts by the asset definition, which uses the internal asset definition to order the constraints; you can change this order using the context menu. Changing the order of the constraints will also change the order of the asset model marker names in the Labels pane, which can be helpful for defining a custom marker ordering for manual labeling.
Color
The color column allows you to change the color of a constraint. This allows you to assign custom colors to different markers associated with an asset. The constraint’s color property has a “rainbow” macro available. This allows you to link the color of the marker to the color defined by the asset.
The MemberID column displays the unique ID values assigned to each constraint. These typically reflect the original ordering of the constraints.
The ActiveID column allows you to view and modify Active Marker ID values. Typically, Active ID values are automatically assigned on asset creation or when adding a marker, but this column gives you greater insight into and control over the process.
The Weight column allows you to tell the solver to prefer a marker when solving the asset data with less than an optimal amount of marker information. For example, the hands are weighted slightly higher in the baseline and core Marker Sets to give preference to the end effectors. Editing this property is not typically recommended, however.
You can also view and modify the constraint settings from the Properties pane. When you select a constraint from the list, the properties of the selected constraint will be listed under the Properties pane. This is just another way to interface with the same information, but in addition, you can also modify the XYZ location of the asset model markers on a Rigid Body or a skeletal bone. Note that these position values are with respect to the local coordinate system of the corresponding Rigid Body or bone.
Exporting constraints makes an XML file containing the names, colors, and marker stick definitions for manual editing. Importing reads the (.xml) files made when exporting. Generating constraints resets the asset back to the default state, if applicable.
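Since the exported file is plain XML, bulk edits can be scripted before re-importing. A sketch is below; the element and attribute names used here are placeholders, so inspect a file exported from your own asset for the actual schema.

```python
# Sketch: batch-renaming marker labels in an exported constraints XML file.
# The attribute name ("marker_name") is a placeholder; open a file exported
# from Motive to find the real element/attribute names for your version.
import xml.etree.ElementTree as ET

tree = ET.parse("constraints.xml")
for node in tree.iter():
    name = node.attrib.get("marker_name")
    if name is not None:
        node.set("marker_name", name.replace("Skeleton1_", "Actor_"))
tree.write("constraints_renamed.xml", encoding="utf-8")
```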
The Calibration pane is used to calibrate the capture volume for accurate tracking. This pane is typically a default pane when first starting Motive. Otherwise, you can access this pane either via the command bar View > Calibration, or the icon. This page provides instructions and tips on how to efficiently use all the functionalities of the Calibration pane.
Calibration is essential for high quality optical motion capture systems. During calibration, the system computes the position and orientation of each camera and the amount of distortion in captured images; these are used to construct a 3D capture volume in Motive. This is done by observing 2D images from multiple synchronized cameras and associating the positions of known calibration markers from each camera through triangulation.
Please note that if there is any change in the camera setup over the course of capture, the system must be recalibrated to accommodate the changes. Moreover, even if the setup is not altered, calibration accuracy may naturally deteriorate over time due to ambient factors, such as changing light levels as the day progresses and fluctuations in temperature. Thus, for accurate results, it is recommended to recalibrate the system periodically.
To learn more about Calibration outside of the pane functionality, please visit the Calibration page on this wiki.
To begin a new calibration, click New Calibration.
If you already have a previous calibration you wish to load, click Load Calibration.
This will open the Calibrations folder.
From here you can choose a Calibration you wish to load.
Each time you create a new calibration in Motive, it will automatically save in the Calibrations folder.
If you wish to view calibrations in File Explorer, click Open Calibration Folder. You cannot load a calibration from this window; the purpose of opening the Calibrations folder is to manipulate the files separately from Motive. For instance, you may want to delete old calibrations that are no longer relevant to your current camera setup.
This icon will open the Calibration page in the wiki for reference.
This link will direct you to the Camera Placement page in the wiki for reference.
This link will direct you to the Aiming and Focusing page in the wiki for reference.
Before performing system calibration, all extraneous reflections or unnecessary markers should ideally be removed or covered so that they are not seen by the cameras. If this is not possible, extraneous reflections can be ignored by applying masks over them in Motive.
Masks can be applied by clicking Mask in the Calibration pane, which will apply red masks over all of the reflections detected in the 2D camera views. Once masked, the pixels in the masked regions will be entirely filtered out from the data. Please note that masks are applied additively, so if there are already masks applied in the camera view, clear them out first before applying new ones.
If masks were previously applied during another calibration or manually via the 2D viewport, you have the option of clearing these masks.
This will help remove masks that are no longer useful or need to be reset in order to cover new reflections.
This button will take you back to the Calibration pane's default window.
This button will auto-apply masks to objects in the capture volume.
You can always click Clear Masks then Mask again to reapply new Masks if you're unhappy with the initial masking or to reset the masks from a previous calibration.
This button will continue with the Calibration process with the masks applied.
Full
When Full is chosen from the dropdown, this will allow for a full volume calibration where each camera is used for the calibration.
Refine
When Refine is chosen from the dropdown, this will allow for only specific cameras to be calibrated. For more information regarding Refine calibrations please visit our Calibration wiki page.
This dropdown allows you to select which wand you'll be using to calibrate your volume. Please refer to the Wand Types section on the Calibration page of this wiki.
This button allows you to go back to the masking window in case you need to make changes to your masks.
This button will initiate the calibration with all previous settings applied.
Once you begin to wand, a camera square in the Calibration pane will turn dark green when that camera has begun successfully collecting samples but does not yet have a sufficient number. Once a sufficient number of samples has been collected, the square will turn light green. Once all of the camera squares are light green, the Start Calculating button will be enabled.
This shows the number of samples a camera has captured. Typically, you want around 1,000-4,000 samples. Samples above this threshold are unnecessary and can often be detrimental to a calibration's accuracy.
This button will start calculating from the samples taken during the wanding stage. During this process, the camera squares will cycle through red, dark cyan, and light cyan.
Calibration samples are Poor and have a high Mean Ray Error.
Calibration samples are Good.
Calibration samples are Excellent.
Calibration samples are Exceptional.
When chosen from the dropdown, Motive will automatically recognize the ground plane you are using.
On occasion, Motive will recognize the ground plane in use as a different type of ground plane. When this occurs, you can choose the appropriate ground plane from the Detected Devices dropdown.
You can create your own custom ground plane by positioning three markers in a right triangle shape. To refine the position, change the vertical offset (how far the markers at its base are from the ground).
It is also possible to create a ground plane from a Rigid Body. Select the Rigid Body you wish to use. Motive will use the pivot point of the Rigid Body as the ground plane.
By toggling the white dot at the bottom of the Calibration pane, you can access the Refine Ground Plane window.
It is possible to refine the ground plane. To do this, you'll want to lay out additional markers in the capture volume. It is important to use markers of the same dimension and height for an accurate refinement. This allows Motive to make sure that the ground plane is level.
By further toggling the white dot at the bottom of the Calibration pane, you can access the Translate and Rotate window.
To further position the ground plane, you can manually enter translate and rotate values. This step is typically not necessary for an accurate ground plane placement.
This toggle enables Continuous Calibration.
When the status is Idle, Motive is waiting to initiate continuous calibration.
Motive is sampling the position of at least four markers.
Motive is calculating the newly acquired samples.
By toggling the white dot at the bottom of the Calibration pane, you can switch to the anchor marker window.
Active anchor markers can be set up in Motive to further improve continuous calibration. When properly configured, anchor markers improve continuous calibration updates, especially on systems that consist of multiple sets of cameras separated into different tracking areas by obstructions or walls, without camera view overlap. They also provide extra assurance that the global origin will not shift during each update, although the continuous calibration feature itself already checks for this.
For more information regarding anchor markers, visit the Anchor Marker Setup section of this wiki.
In Motive, the Edit Tools pane can be accessed under the View tab or by clicking the icon on the main toolbar.
The Edit Tools pane contains the functionality to modify 3D data. Four main functions exist: trimming trials, filling gaps, smoothing trajectories, and swapping data points. Trimming trials refers to the clearing of data points before and after a gap. Filling gaps is the process of filling in a marker's trajectory for each frame that has no data. Smoothing trajectories filters out unwanted noise in the signal. Swapping allows two markers to swap their trajectories.
Read through the Data Editing page to learn about utilizing the edit tools.
Default: 3 frames. The Trim Size Leading/Trailing defines how many data points will be deleted before and after a gap.
Default: OFF. The Smart Trim feature automatically sets the trimming size based on trajectory spikes near the existing gap. It is often not needed to delete numerous data points before or after a gap, but there are some cases where it's useful to delete more data points in case jitters are introduced from the occlusion. When enabled, this feature will determine whether each end of the gap is suspicious with errors, and delete an appropriate number of frames accordingly. Smart Trim feature will not trim more frames than the defined Leading and Trailing value.
Default: 5 frames. The Minimum Segment Size determines the minimum number of frames required for a trajectory to be modified by the trimming feature. For instance, if a trajectory is continuous for fewer frames than the defined minimum segment size, that segment will not be trimmed. Use this setting to define the smallest trajectory segment that gets trimmed.
Default: 2 frames. The Gap Size Threshold defines the minimum size of a gap that is affected by trimming. Any gaps that are smaller than this value are untouched by the trim feature. Use this to limit trimming to only the larger gaps. In general it is best to keep this at or above the default, as trimming is only effective on larger trajectories.
Automatically searches through the selected trajectory, highlights the range, and moves the cursor to the center of the gap before the current frame.
Automatically searches through the selected trajectory, highlights the range, and moves the cursor to the center of the gap after the current frame.
Fills all gaps in the current TAK. If you have a specific frame range selected in the timeline, only the gaps within the selected frame range will be filled.
Sets the interpolation method to be used. Available options are constant, linear, cubic, pattern-based, and model-based. For more information, read the Data Editing page.
The maximum size, in frames, that a gap can be for Motive to fill. Raising this will allow larger gaps to be filled. However, larger gaps may be more prone to incorrect interpolation.
When using pattern-based interpolation to fill gaps in a marker's trajectory, other reference markers are selected alongside the target marker to interpolate from. The Fill Target drop-down menu specifies which of the selected markers is the target marker for the pattern-based interpolation.
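To illustrate what an interpolation-based fill does, here is a sketch of a cubic fill over one coordinate of a trajectory using scipy. This is a conceptual analogue only, not Motive's internal implementation.

```python
# Sketch: cubic interpolation across a gap in one coordinate of a marker
# trajectory (conceptual analogue of the cubic fill option).
import numpy as np
from scipy.interpolate import CubicSpline

frames = np.array([0, 1, 2, 3, 8, 9, 10])                 # frames 4-7 are a gap
x = np.array([0.00, 0.10, 0.22, 0.35, 0.90, 1.00, 1.08])  # marker x-positions

spline = CubicSpline(frames, x)
gap_frames = np.arange(4, 8)
filled = spline(gap_frames)  # interpolated x-positions for the missing frames
```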
Applies smoothing to all frames on all tracks of the current selection in the timeline.
Determines how strongly your data will be smoothed. The lower the setting, the more smoothed the data will be. High frequencies are present during sharp transitions in the data, such as foot plants, but can also be introduced by noise in the data. Commonly used values for the Filter Cutoff Frequency are 6-12 Hz, but you may want to adjust it upward for fast, sharp motions to avoid softening transitions that need to stay sharp.
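Motive's exact smoothing filter is not specified on this page, but the effect of a cutoff frequency can be illustrated with a zero-phase low-pass Butterworth filter, a common choice in mocap post-processing:

```python
# Sketch: zero-phase low-pass filtering of a noisy trajectory, illustrating
# what a cutoff frequency does (not Motive's internal implementation).
import numpy as np
from scipy.signal import butter, filtfilt

fs = 120.0    # example capture rate in Hz
cutoff = 8.0  # Hz, within the commonly used 6-12 Hz range

b, a = butter(4, cutoff / (fs / 2.0))            # 4th-order filter, normalized cutoff
noisy = np.cumsum(np.random.randn(600)) * 0.01   # stand-in marker coordinate
smoothed = filtfilt(b, a, noisy)                 # filtfilt applies the filter with zero lag
```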
Deletes all trajectories within the selected frame range that span fewer frames than the percentage defined in the setting below.
All trajectories that span fewer frames than the percentage defined in this setting will be deleted.
Jumps to the most recent detected marker swap.
Jumps to the next detected marker swap.
Select the markers to be swapped.
Choose the direction, from the current frame, in which to apply the swap.
Swaps the two markers selected under Markers to Swap.
The Devices pane can be accessed under the View tab in Motive or by clicking the button on the main toolbar.
In Motive, all of the connected devices get listed in the Devices pane, including tracking cameras, synchronization hubs, color reference cameras, and other supported peripheral devices such as force plates and data acquisition devices. Using this pane, core settings of each component can be adjusted, including sampling rates and camera exposure. Cameras can be grouped to control the system more quickly. When specific devices are selected in this pane, their respective properties are listed in the Properties pane, where you can view and modify their settings.
At the very top of the Devices pane, the master camera system frame rate is indicated. All synchronized devices will capture at a whole multiple or a whole divisor of this master rate.
The master camera frame rate is indicated at the top of the Devices pane. This is the rate that drives all of the tracking cameras. To change it, simply click on the rate to open the drop-down menu and select the desired rate.
Reference cameras using MJPEG grayscale video mode, or Prime Color cameras, can capture either at the same frame rate as the other tracking cameras or at a whole fraction of the master frame rate. In many applications, capturing at a lower frame rate is better for reference cameras because it reduces the amount of data recorded or streamed, decreasing the size of the capture files overall. This can be adjusted by configuring the Multiplier setting.
eSync2 users: If you are using the eSync2 synchronization hub to synchronize the camera system to another signal (e.g. Internal Clock), you can apply a multiplier or divisor to the input signal to adjust the camera system frame rate.
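As a worked example of the whole-multiple/whole-divisor rule (the rates here are illustrative): with a 240 Hz master rate, a reference camera with a 1/8 multiplier records at 30 Hz. A quick sketch enumerating the valid reference rates:

```python
master_rate = 240  # Hz, example master camera rate

# Reference cameras capture at a whole fraction of the master rate,
# so the valid reference rates are the whole divisors of the master rate.
valid_reference_rates = sorted(
    {master_rate // d for d in range(1, master_rate + 1) if master_rate % d == 0},
    reverse=True,
)
print(valid_reference_rates)  # [240, 120, 80, 60, 48, 40, 30, 24, 20, 16, ...]
```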
By clicking on the down-arrow button under the camera frame rate, you can expand the list of grouped devices. At first, you may not have any grouped devices. To create new groups, select multiple devices listed under this panel, right-click to bring up the context menu, and create a new group. Grouping the cameras allows easier control over multiple devices in the system.
The tracking cameras section lists all of the motion capture cameras connected to the system. Here, you can configure and control the cameras. You can right-click on the camera setting headers to show/hide specific camera settings and drag them around to change the order. When you have multiple cameras selected, changing a setting will modify it for all of the selected cameras. You can also group the cameras to select them and change their settings quickly. The configurable options include:
Framerate multiplier
Exposure length (microseconds)
IR LED ring on/off
Real-time reconstruction contribution
Imager Gain
IR Filter on/off
The multiplier setting applies the selected multiplier to the master sampling rate. Multipliers cannot be applied to the tracking cameras, but you can apply them to reference cameras that are capturing in MJPEG video processing mode. This allows the reference cameras to capture at a slower frame rate, which reduces the number of frames captured and thus the overall data size.
The mode setting indicates which video mode the cameras are set to. You can click on the icons to toggle between the tracking mode and the reference grayscale mode. Available video modes may differ slightly between camera types, but they include:
Sets the amount of time that the camera exposes per frame. The minimum and maximum values depend on both the type of camera and the frame rate. Higher exposure allows more light in, creating a brighter image that can increase visibility for small and dim markers. However, setting exposure too high can introduce false reflections, larger marker blooms, and marker blurring, all of which can negatively impact marker data quality.
Exposure value is measured in scanlines for V100 and V120 series cameras, and in microseconds for Flex13, S250e and PrimeX Series cameras.
This setting enables or disables the illumination of the camera's IR LED ring. In certain applications, you may want to disable this setting to stop the IR LEDs from strobing. For example, when tracking active IR LED markers, there is no need for the cameras to emit IR light, so you may want to disable this to stop the IR illumination, which could introduce additional noise in the data.
The IR intensity setting is now an on/off setting. Please adjust the exposure setting to adjust the brightness of the image in the IR spectrum.
This enables/disables the contribution of the respective cameras to the real-time reconstruction of the 3D data. When cameras are disabled from contributing to the reconstruction, they will still be collecting capture data, but the data will not be processed through the real-time reconstruction. Please note that 2D frames will still get recorded into the capture file, and you can run the post-processing reconstruction pipeline to obtain fully contributed 3D data in Edit mode.
In most applications, you can have all of the cameras contributing to the 3D reconstruction engine without any problem. For very high camera count systems, however, having every camera contribute to the reconstruction engine can slow down the real-time point cloud solve and result in dropped frames. In this case, you can disable a few cameras from real-time reconstruction to prevent frame drops and use the collected 2D data later in post-processing.
Increasing a camera’s gain will brighten the image, which can improve tracking range at very long distances. Higher gain levels can introduce noise into the 2D camera image, so gain should only be used to increase range in large setup areas, when increasing exposure and decreasing lens f-stop does not sufficiently brighten up the captured image.
Sets the camera to view either visible or infrared light on cameras equipped with a Filter Switcher. Infrared Spectrum should be selected when the camera is being used for marker tracking applications. Visible Spectrum can optionally be selected for full frame video applications, where external, visible spectrum lighting will be used to illuminate the environment instead of the camera’s IR LEDs. Common applications include reference video and external calibration methods that use images projected in the visible spectrum.
Prime color reference cameras will also get listed under the devices pane. Just like other cameras in the Tracking group, you can configure the camera settings, including the sampling rate multiplier to decrease the sampling rate of the camera. Additionally, captured image resolution and the data transfer bit-rate can be configured.
This property sets the resolution of the images that are captured by selected cameras. Since the amount of data increases with higher resolution, depending on which resolution is selected, the maximum frame rate allowed by the network bandwidth will vary.
The bit-rate setting determines the transmission rate output from the selected color camera. This is how you control the data output from color cameras to avoid overloading the camera network bandwidth. At a higher bit-rate setting, more data is output and the image quality is better, since less image compression is applied. However, if there is too much data output, it may overload the network bandwidth and result in frame drops. Thus, it is best to minimize this setting while keeping the image quality at an acceptable level.
Detected synchronization hubs will also get listed under the devices pane. You can select the synchronization hubs in the Devices pane, and configure its input and output signals through the Properties pane. For more information on this, please read through the Synchronization page.
Detected force plates and NI-DAQ devices will get listed under the Devices pane as well. You can apply multipliers to the sampling rate if they are synchronized via trigger. If they are synchronized via a reference clock signal (e.g. Internal Clock), their sampling rate will be fixed to the rate of that signal.
For more information, please read through the force plate setup pages (AMTI Force Plate Setup, Bertec Force Plate Setup, Kistler Force Plate Setup) or the NI-DAQ setup page.
The Builder pane can be accessed under the View tab or by clicking the icon on the main toolbar.
The Builder pane is used for creating and editing trackable models, also called trackable assets, in Motive. In general, Rigid Body assets are created for tracking rigid objects, and Skeleton assets are created for tracking human motions.
When created, trackable models store the positions of markers on the target object and use the information to auto-label the markers in 3D space. During the auto-label process, a set of predefined labels gets assigned to 3D points using the solver pipeline, and the labeled dataset is then used for calculating the position and orientation of the corresponding Rigid Bodies or Skeleton segments.
The trackable models can be used to auto-label the 3D capture both in Live mode (real-time) and in Edit mode (post-processing). Each created trackable model will have its own properties, which can be viewed and changed under the Properties pane. If new Skeletons or Rigid Bodies are created during post-processing, the Take will need to be auto-labeled again in order to apply the changes to the 3D data.
On the Builder pane, you can either create a new trackable asset or modify an existing one. Select the Type of asset you wish to work on, and then select whether you wish to create a new asset or make modifications to existing assets. The Create and Modify tools for the different asset types are explained in the sections below.
For creating Rigid Bodies, select Rigid Body from the Type option and access the Create tab at the top. Here, you can create Rigid Body assets to track any markered objects in the volume. In addition to standard Rigid Body assets, you can also create Rigid Body models for head-mounted displays (HMDs) and measurement probes.
Step 1.
Select all associated Rigid Body markers in the 3D viewport.
Step 2.
On the Builder pane, confirm that the selected markers match the markers that you wish to define the Rigid Body from.
Step 3.
Click Create to define a Rigid Body asset from the selected markers.
You can also create a Rigid Body by doing the following actions while the markers are selected:
Perspective View (3D viewport): While the markers are selected, right-click on the perspective view to access the context menu. Under the Rigid Body section, click Create From Selected Markers.
Hotkey: While the markers are selected, use the create Rigid Body hotkey (Default: Ctrl +T).
Step 4.
Once the Rigid Body asset is created, the markers will be colored (labeled) and interconnected to each other. The newly created Rigid Body will be listed under the Assets pane.
Defining Assets in Edit mode:
If Rigid Bodies or Skeletons are created in Edit mode, the corresponding Take needs to be auto-labeled. Only then will the Rigid Body markers be labeled using the Rigid Body asset, and the positions and orientations computed for each frame. If the 3D data has not been labeled after edits to the recorded data, the asset may not be tracked.
This feature can be used only with HMDs that have the OptiTrack Active HMD clips mounted.
When using an OptiTrack system for VR applications, it is important that the pivot point of the HMD Rigid Body is placed at the appropriate location: at the root of the nose, in between the eyes. When using the HMD clips, you can utilize the HMD creation tools in the Builder pane to have Motive estimate this spot and place the pivot point accordingly. It uses the known marker configuration on the clip to precisely position the pivot point and set the desired orientation.
HMDs with passive markers can utilize the External Pivot Alignment tool to calibrate the pivot point.
First of all, make sure Motive is configured for tracking active markers.
Open the Builder pane under View tab and click Rigid Bodies.
Under the Type drop-down menu, select HMD. This will bring up the options for defining an HMD Rigid Body.
If the selected markers match one of the Active clips, it will indicate which type of Active Clip is being used.
Under the Orientation drop-down menu, select the desired orientation of the HMD. The orientation used for streaming to Unity is +Z forward, and for Unreal Engine it is +X forward; alternatively, you can specify the expected orientation axis on the client plugin side.
Hold the HMD at the center of the tracking volume where all of the active markers are tracked well.
Select the 8 active markers in the 3D viewport.
Click Create. An HMD Rigid Body will be created from the selected markers and it will initiate the calibration process.
During calibration, slowly rotate the HMD to collect data samples in different orientations.
Once all necessary samples are collected, the calibrated HMD Rigid Body will be created.
You can also define a measurement probe using the Builder pane. The measurement probe tool utilizes the precise tracking of OptiTrack mocap systems and allows you to measure 3D locations within a capture volume. For more information, please read through the Measurement Probe Kit Guide.
Open the Builder pane under View tab and click Rigid Bodies.
Bring the probe out into the tracking volume and create a Rigid Body from the markers.
Under the Type drop-down menu, select Probe. This will bring up the options for defining a Rigid Body for the measurement probe.
Select the Rigid Body created in step 2.
Place and fit the tip of the probe in one of the slots on the provided calibration block.
Note that there will be two steps in the calibration process: refining the Rigid Body definition and calibrating the pivot point. Click the Create button to initiate the probe refinement process.
Slowly move the probe in a circular pattern while keeping the tip fitted in the slot; making a cone shape overall. Gently rotate the probe to collect additional samples.
After the refinement, it will automatically proceed to the next step; the pivot point calibration.
Repeat the same movement to collect additional sample data for precisely calculating the location of the pivot or the probe tip.
When sufficient samples are collected, the pivot point will be positioned at the tip of the probe and the Mean Tip Error will be displayed. If the probe calibration was unsuccessful, repeat the calibration from step 4.
Once the probe is calibrated successfully, a probe asset will be displayed over the Rigid Body in Motive, and live x/y/z position data will be displayed under the Probe pane.
Caution
The probe tip MUST remain fitted securely in the slot on the calibration block during the calibration process.
Also, do not press in with the probe since the deformation from compressing could affect the result.
Note: Custom Probes
It's highly recommended to use the Probe Kit when using this feature. That said, you can also use any markered object with a pivot arm to define a custom probe in Motive, but a custom probe may produce less accurate measurements, especially if the pivot arm and the object are not rigid and/or if any slight translation occurs during the probe calibration steps.
The Builder pane has tools that can be used to modify the tracking of a Rigid Body that's selected in Motive. To modify Rigid Bodies, select a single Rigid Body and access the Modify tab at the top. This will bring up the options for editing a Rigid Body.
This feature is supported in Live mode only.
The Rigid Body refinement tool improves the accuracy of the Rigid Body calculation in Motive. When a Rigid Body asset is initially created, Motive references only a single frame for the Rigid Body definition. The Rigid Body refinement tool allows Motive to collect additional samples in Live mode to achieve more accurate tracking results. More specifically, this feature improves the calculation of the expected marker locations of the Rigid Body as well as the position and orientation of the Rigid Body itself.
Steps
Select View from the toolbar at the top and open the Builder pane.
Select Rigid Bodies from the Type dropdown menu.
In Live mode, select an existing Rigid Body asset that you wish to refine from the Assets pane.
Hold the selected physical Rigid Body at the center of the capture volume so that as many cameras as possible can clearly capture the markers on the Rigid Body.
Click Refine in the Builder pane.
Slowly rotate the Rigid Body to collect samples at different orientations until the progress bar is full.
Once all necessary samples are collected, the Refine and Create + Refine buttons will appear again in the Builder pane and the refinements will have been applied.
The Probe Calibration feature under the Rigid Body edit options can be used to re-calibrate a pivot point of a measurement probe or a custom Rigid Body. This step is also completed as one of the calibration steps when first creating a measurement probe, but you can re-calibrate it under the Modify tab.
In Motive, select the Rigid Body or a measurement probe.
Bring out the probe into the tracking volume where all of its markers are well-tracked.
Place and fit the tip of the probe in one of the slots on the provided calibration block.
Click Start.
Once it starts collecting the samples, slowly move the probe in a circular pattern while keeping the tip fitted in the slot; making a cone shape overall. Gently rotate the probe to collect additional samples.
When sufficient samples are collected, the mean error of the calibrated pivot point will be displayed.
Click Apply to use the calibrated definition or click Cancel to calibrate again.
The Modify tab is used to apply translation or rotation to the pivot point of a selected Rigid Body. A pivot point of a Rigid Body represents both position (x,y,z) and orientation (pitch, roll, yaw) of the corresponding asset.
You can also use the Gizmo tools to quickly modify the pivot point of a Rigid Body.
Use this tool to translate a pivot point in x/y/z axis (in mm). You can also reset the translation to set the pivot point back at the geometrical center of the Rigid Body.
Use this tool to apply rotation to the local coordinate system of a selected Rigid Body. You can also reset the orientation to align the Rigid Body coordinate axes with the global axes. When resetting the orientation, the Rigid Body must be tracked in the scene.
The OptiTrack Clip Tool recalibrates HMDs with OptiTrack HMD Clips to position the pivot point at an appropriate location. The steps are the same as when first creating the HMD Rigid Body.
This feature is useful when tracking a spherical object (e.g. ball). It will assume that all of the markers on the selected Rigid Body are placed on a surface of a spherical object, and the pivot point will be calculated and re-positioned accordingly. Simply select a Rigid Body in Motive, open the Builder pane to edit Rigid Body definitions, and then click Apply to place the pivot point at the center of the spherical object.
To create Skeletons in Motive, select the Skeleton option from the Type dropdown menu and access the Create tab at the top. Here, you select which Skeleton Marker Set to use, choose the calibration pose, and create the Skeleton model.
Step 1.
From the Skeleton creation options on the Builder pane, select a Skeleton Marker Set template from the Template drop-down menu. This will bring up a Skeleton avatar displaying where the markers need to be placed on the subject.
Step 2.
Refer to the avatar and place the markers on the subject accordingly. For accurate placements, ask the subject to stand in the calibration pose while placing the markers. It is important that these markers get placed at the right spots on the subject's body for the best Skeleton tracking. Thus, extra attention is needed when placing the Skeleton markers.
The magenta markers indicate the segment markers that can be placed at a slightly different position within the same segment.
Step 3.
Double-check the marker counts and their placements. It may be easier to use the 3D viewport in Motive to do this. The system should be tracking the attached markers at this point.
Step 4.
In the Builder pane, make sure the numbers under the Markers Needed and Markers Detected sections match. If the Skeleton markers are not automatically detected, manually select the Skeleton markers from the 3D perspective view.
Step 5.
Select a desired set of marker labels under the Labels section. Here, you can use the Default labels to assign labels that are defined by the Marker Set template. Alternatively, you can assign custom labels by loading previously prepared marker-name XML files in the Labels section.
Step 6.
The next step is to select the Skeleton creation pose settings. Under the Pose drop-down menu, select the desired calibration pose you want to use for defining the Skeleton. This is set to the T-pose by default.
Step 7.
Ask the subject to stand in the selected calibration pose. Here, standing in a proper calibration posture is important because the pose of the created Skeleton will be calibrated from it. For more details, read the calibration poses section.
Step 8.
Click Create to create the Skeleton. Once the Skeleton model has been defined, confirm that all Skeleton segments and assigned markers are located at the expected locations. If any of the Skeleton segments seem to be misaligned, delete the Skeleton and create it again after adjusting the marker placements and the calibration pose.
In Edit Mode
If you are creating a Skeleton in the post-processing of captured data, you will have to auto-label the Take to see the Skeleton modeled and tracked in Motive.
You can also select a Skeleton and use CTRL + R hotkey to refresh the tracking of Skeleton if needed.
Existing Skeleton assets can be recalibrated using the existing Skeleton information. The recalibration recreates the selected Skeleton using the same Skeleton Marker Set. This feature recalibrates the Skeleton asset and refreshes the expected marker locations on the asset.
To recalibrate Skeletons, select all of the associated Skeleton markers from the perspective view along with the corresponding Skeleton model. Make sure the selected Skeleton is in a calibration pose, and click Recalibrate. You can also recalibrate from the context menu in the Assets pane or in the 3D Viewport.
Skeleton recalibration does not work with Skeleton templates with added markers.
You can add or remove asset model markers from a Rigid Body or a Skeleton using the Builder pane. This adds or removes markers from the existing Rigid Body and/or Skeleton definition. To do this, you will need to make sure the selection of Asset Model Markers is enabled in the Perspective viewport. Then, follow the steps below to add or remove markers:
Enable selection of Asset Model Markers.
Access the Modify tab on the Builder pane.
Select a Skeleton or a Rigid Body that you wish to modify the asset markers for.
CTRL + left-click on the asset model marker(s) associated with the selected asset.
On the Asset Model Markers in the Builder pane, click + for adding the marker to the definition or - for removing the asset model marker.
Use the Constraints pane to modify marker label and/or colors.
This feature works for Skeleton assets only
For Skeleton marker sticks, you can use the Builder pane to add/remove marker sticks and also modify the color of the sticks as needed.
The Data pane is used for managing the Take files. This pane can be accessed under the View tab in Motive or by clicking the icon on the main toolbar.
Simple
Use the simplest data management layout.
Advanced
Additional column headers are added to the layout.
Classic
Use the classic Motive layout where Take name, availability of 2D data and 3D data is listed.
New...
Create a new customizable layout.
Rename
Rename a custom layout.
Delete
Delete a custom layout.
2D Mode
Import Shot List...
Import a list of empty Take names from a CSV file. This is helpful when you plan a list of shots in advance of the capture.
Export Take Info...
Exports a list of Take information into an XML file. Included elements are name of the session, name of the take, file directory, involved assets, notes, time range, duration, and number of frames included.
The left section of the Data pane is used to list the sessions that are loaded in Motive. Session folders group multiple associated Take files in Motive, and they can be imported simply by dragging and dropping or importing a folder into the data management pane. When a session folder is loaded, all of the Take files within the folder are loaded together.
What happened to the Project TTP Files?
The TTP project file format is deprecated starting from the 2.0 release. Now, recorded Takes will be managed by simply loading the session folders directly onto the new Data pane. For exporting and importing the software setting configurations, the Motive profile file format will replace the previous role of the TTP file. In the Motive profile, software configurations such as reconstruction settings, application settings, data streaming settings, and many other settings will be contained. Camera calibration will no longer be saved in TTP files, but they will be saved in the calibration file (CAL) only. TTP files can still be loaded in Motive 2.0. However, we suggest moving away from using TTP files.
Set the selected session as the current session.
Rename the session folder.
This creates a folder under the selected directory.
Opens the session folder in the file explorer.
Delete the session folder. All of its contents will be deleted as well.
When a session folder is selected, associated Take files and their descriptions are listed in a table format on the right-hand side of the Data pane. For each Take, general descriptions and basic information are shown in the columns of the respective row. To view additional descriptions, click on the pane menu, select the Advanced option, and all of the descriptions will be listed. For each of the enabled columns, you can click on the arrow next to it to sort the list of Takes up or down by that category.
Best
Health
Progress
The progress indicator can be used to track the status of the Takes. Use the indicators to track the workflow-specific progress of the Takes.
Ready
Recorded
Reviewed
Labeled
Cleaned
Exported
Name
Shows the name of the Take.
2D
3D
Video
Solved
Audio
Analog
Data Recorded
Shows the time and the date when the Take was recorded.
Frame Rate
Shows the camera system frame rate which the Take was recorded in.
Duration
Time length of the Take.
Total Frames
Total number of captured frames in the Take.
Notes
Section for adding comments on each Take.
Start Timecode
A search bar is located at the bottom of the Data pane, and you can search a selected session folder using any number of keywords and search filters. Motive will use the text in the input field to list out the matching Takes from the selected session folder. Unless otherwise specified, the search filter will scope to all of the columns.
Search for exact phrase
Wrap your search text in quotation marks.
e.g. Search "shooting a gun"
for searching a file named Shooting a Gun.tak.
Search specific fields
To limit the search to specific columns, type field:
, plus the name of a column enclosed with quotation marks, and then the value or term you're searching for.
Multiple fields and/or values may be specified in any order.
e.g. field:"name" Lizzy
, field:"notes" Static capture
.
Search for true/false values
To search specific binary states from the Take list, type the name of the field followed by a colon (:), and then enter either true ([t], [true], [yes], [y]) or false ([f], [false], [no], [n]).
e.g. Best:[true]
, Solved:[false]
, Video:[T]
, Analog:[yes]
The table layout can also be customized. To do so, go to the pane menu and select New or any of the previously customized layouts. Once you are in a customizable layout, right-click on the top header bar and add or remove categories from the table.
A list of take names can be imported from either a CSV file or carriage return-separated text that contains a take name on each line. Using this feature, you can plan, organize, and create a list of capture names ahead of the actual recording. Once take names have been imported, a list of empty takes with the corresponding names will be listed for the selected session folder.
From Text
Take lists can be imported by copying a list of take names and pasting them onto the Data pane. Take names must be separated by carriage returns; in other words, each take name must be in a new line.
From a CSV File
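A minimal sketch for generating such a file is below, assuming one take name per row; verify the expected layout against the Import Shot List dialog.

```python
# Sketch: writing a shot-list CSV with one take name per row (the expected
# layout is an assumption; verify against the Import Shot List dialog).
import csv

shots = ["Walk_01", "Walk_02", "Run_01", "Jump_01"]
with open("shot_list.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for name in shots:
        writer.writerow([name])
```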
Saves the selected take
Reverts any changes that were made. This does not work on the currently opened Take.
Selects the current take and loads it for playback or editing.
Allows the current take to be renamed.
Opens an explorer window to the current asset path. This can be helpful when backing up, transferring, or exporting data.
Separate reconstruction pipeline without the auto-labeling process. Reconstructs 3D data using the 2D data. Reconstruction is required to export Marker data.
Separate auto-labeling pipeline that labels markers using the existing tracking asset definitions. Available only when 3D data is reconstructed for the Take. Auto-label is required to export Markers labeled from Assets.
Combines 2D data from each camera in the system to create a usable 3D take. It also incorporates assets in the Take to auto-label and create rigid bodies and skeletons in the Take. Reconstruction is required to export Marker data and Auto-label is required when exporting Markers labeled from Assets.
Solves 6 DoF tracking data of skeletons and rigid bodies and bakes them into the TAK recording. When the assets are solved, Motive reads from recorded Solve instead of processing the tracking data in real-time. Solving is required prior to exporting Assets.
Performs all three reconstruct, auto-label, and solve pipelines in consecutive order. This basically recreates 3D data from recorded 2D camera data.
Opens the Export dialog window to select and initiate file export. Valid formats for export are CSV, C3D, FBX, BVH.
Reconstruction is required to export Marker data, Auto-label is required when exporting Markers labeled from Assets, and Solving is required prior to exporting Assets.
Please note that if you have Assets that are unsolved and just wish to export reconstructed Marker data, you can toggle off Rigid Bodies and Bones (Skeletons) from the Export window (see image below). For more information please see our Data Export page.
Opens the export dialog window to initiate scene video export to AVI.
Exports an audio file when selected Take contains audio data.
Opens the Delete 2D Data pop-up where you can select to delete the 2D data, Audio data, or reference video data. Read more in Deleting 2D data.
Permanently deletes the 3D data from the take. This option is useful in the event reconstruction or editing causes damage to the data.
Unlabels all existing marker labels in 3D data. If you wish to re-auto-label markers using modified asset definitions, you will need to first unlabel markers for respective assets.
Deletes 6 DoF tracking data that was solved for skeleton and rigid bodies. If Solved data doesn't exist, Motive instead calculates tracking of the objects from recorded 3D data in real-time.
Archives the original take file and creates a duplicate version. Recommended prior to completing any post-production work on the take file.
Opens a dialog box to confirm permanent deletion of the take and all associated 2D, 3D, and Joint Angle Data from the computer. This option cannot be undone.
Deletes all assets that were recorded in the take.
Copies the assets from the current capture to the selected Takes.
This page provides information on the Probe pane, which can be accessed under the Tools tab or by clicking on the icon from the toolbar.
This section highlights what's in the Probe pane. For detailed instructions on how to use the Probe pane to collect measurement samples, read through Measurement Probe Kit Guide.
The Probe Calibration feature under the Rigid Body edit options can be used to re-calibrate a pivot point of a measurement probe or a custom Rigid Body. This step is also completed as one of the calibration steps when first creating a measurement probe, but you can re-calibrate it under the Modify tab.
In Motive, select the Rigid Body or a measurement probe.
Bring out the probe into the tracking volume where all of its markers are well-tracked.
Place and fit the tip of the probe in one of the slots on the provided calibration block.
Click Start
Once it starts collecting samples, slowly move the probe in a circular pattern while keeping the tip fitted in the slot, tracing a cone shape overall. Gently rotate the probe to collect additional samples.
When sufficient samples are collected, the mean error of the calibrated pivot point will be displayed.
Click Apply to use the calibrated definition or click Cancel to calibrate again.
The Digitized Points section is used for collecting sample coordinates using the probe. You can select which Rigid Body to use from the drop-down menu and set the number of frames used to collect the sample. Clicking on the Sample button will trigger Motive to collect a sample point and save it into the C:\Users\[Current User]\Documents\OptiTrack\measurements.csv file.
When needed, export the measurements of the accumulated digitized points into a separate CSV file, and/or clear the existing samples to start a new set of measurements.
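As a minimal sketch of consuming the accumulated samples (this assumes measurements.csv is a plain comma-separated file; the exact column layout may vary by Motive version):

import csv
from pathlib import Path

# Default output location documented above; Path.home() resolves the
# C:\Users\[Current User] portion for the current Windows user.
path = Path.home() / "Documents" / "OptiTrack" / "measurements.csv"

with open(path, newline="") as f:
    for row in csv.reader(f):
        print(row)  # each row holds one digitized sample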
Shows the live X/Y/Z position of the calibrated probe tip.
Shows the live X/Y/Z position of the last sampled point.
Shows the distance between the last point and the live position of the probe tip.
Shows the distance between the last two collected samples.
Shows the angle between the last three collected samples.
This page includes detailed step-by-step instructions on customizing constraint XML files for assets. In order to customize the marker labels, marker colors, and marker sticks for an asset, a constraint XML file may be exported, customized, and loaded back into Motive. Alternatively, the Constraints pane can be used to modify the marker names and colors, and the Builder pane can be used to customize marker sticks directly in Motive. This process has been standardized between asset types, with the only exception being that marker sticks for Rigid Bodies do not work in Motive 3.0.
a) First, create an asset using the Builder pane or the 3D context menu.
b) Right-click on the asset in the Assets pane and select Export Markers. Alternately, you can click the "..." menu at the top of the Constraints pane.
c) In the export dialog window, select a directory to save the constraints XML file. Click Save to export.
a) Open the exported XML file using a text editor. It will contain corresponding marker label information under the <marker_names> section.
b) Customize the marker labels in the XML file. Under the <marker_names> section, change the name values to the desired labels, but do not change the old_name values. The order of the markers should remain the same unless you would like to change the labeling order.
c) If you changed marker labels, the corresponding marker names must also be renamed within the <marker_colors> and <marker_sticks> sections as well. Otherwise, the marker colors and marker sticks will not be defined properly.
a) To customize the marker colors and sticks, open the exported XML file using a text editor and scroll down to the <marker_colors> and/or <marker_sticks> sections. If these sections do not exist in the exported XML file, you could be using an old Skeleton created before Motive 1.10; updating and re-exporting the old Skeleton will add these sections to the XML.
b) You can customize the marker colors and the marker sticks in these sections. For each marker name, you must use exactly the same marker labels that were defined in the <marker_names> section of the same XML file. If any marker label was changed in the <marker_names> section, the changed name must be reflected in the respective colors and sticks definitions as well. In other words, if a Custom_Name was assigned under name for a label in the <marker_names> section <marker name="Custom_Name" old_name="Name" />, the same Custom_Name must be used to rename all of the respective marker names within the <marker_colors> and/or <marker_sticks> sections of the XML.
Marker Colors: For each marker in a Skeleton, there are respective name and color definitions under the <marker_colors> section of the XML. To change the corresponding marker colors for the template, edit the RGB parameter and save the XML file.
Marker Sticks: A marker stick is simply a line interconnecting two labeled markers within the Skeleton. Each marker stick definition consists of two marker labels for creating the stick and an RGB value for its color. To modify the marker sticks, edit the marker names and the color values. You can also define additional marker sticks by copying the format from the other marker stick definitions. A sketch of the overall file layout is shown below.
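This sketch is illustrative only: the <marker_names> entry format matches the example quoted above, but the element and attribute names used here for the colors and sticks sections (marker_color, marker_stick, marker1, marker2, rgb) are assumptions, so verify the exact spelling against an XML file exported from your own Motive version.

<!-- Marker labels: edit "name"; keep "old_name" unchanged. -->
<marker_names>
  <marker name="Custom_Head" old_name="Head" />
  <marker name="Custom_Chest" old_name="Chest" />
</marker_names>

<!-- Marker colors (assumed element/attribute names): one RGB color per renamed label. -->
<marker_colors>
  <marker_color name="Custom_Head" rgb="255,0,0" />
  <marker_color name="Custom_Chest" rgb="0,0,255" />
</marker_colors>

<!-- Marker sticks (assumed element/attribute names): a line between two renamed labels, plus its color. -->
<marker_sticks>
  <marker_stick marker1="Custom_Head" marker2="Custom_Chest" rgb="0,255,0" />
</marker_sticks>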
Now that you have customized the XML file, it can be loaded each time when creating new Skeletons. In the Builder pane under Skeleton creation options, select the corresponding Marker Set. Next, under the Constraints drop down menu select "Choose File..." to find and import the XML file. When you Create the Skeleton, the custom marker labels, marker colors, and marker sticks will be applied.
If you manually added extra markers to a Skeleton, you must import the constraint XML file after adding the extra markers, or modify the extra markers using the Constraints pane and Builder pane.
Note: For Skeletons, modified constraint XML files can only be used with the same Marker Set template. In other words, if you exported a Baseline (41) Skeleton and modified its constraint XML file, the same Baseline (41) Marker Set will typically need to be created in order to import the customized XML file.
You can also apply a customized constraint XML file to an existing asset using the import constraints feature. Right-click on an asset in the Assets pane (or click the "..." menu in the Constraints pane) and select Import Constraints from the menu. This brings up a dialog window for importing a constraint XML file. Import the customized XML template and the modifications will be applied to the asset. This feature must be used if extra markers were added to the default XML template.
In Motive, the Status Log pane can be accessed under the View tab or by clicking the icon on the main toolbar.
The Status Log pane logs important events or statuses of the system operation. Actively occurring events are listed under the Current section and all of the events are logged under the History section for the record. The log can be exported into a text file for troubleshooting references.
In general, when there are no errors in the system operation, the Current section of the log will remain free of warning or error messages. Occasionally during system operations, however, error/warning messages (e.g. Dropped Frame, Discontinuous Frame ID) may pop up momentarily and disappear afterward. This can occur when Motive is changing its configuration; for example, when switching between Live and Edit modes or when re-configuring the synchronization settings. This is common behavior and does not necessarily indicate a system error as long as the messages do not persist in the Current section. If an error message persists under the Current section or has a high event count, it indicates an issue with the system operation.
Status messages fall into three categories: Informational, Warning, and Error. Logged status messages on the history list can be filtered by choosing a specific category under the Display Filter section. Status messages appear in chronological order with corresponding timestamps, which indicate the number of seconds elapsed since the software started.
Symbol Convention
Note: This table is not an exhaustive list of messages in the Log pane.
Camera Calibration Updated ( {#} mm/ray mean error)
The continuous calibration feature has updated and improved the camera calibration.
Plugin Device Created: {Name}
The plugin device object for an external device (e.g. force plate and NIDAQ) has been successfully created.
Plugin Device Registered: {Name}
The plugin device has been registered in Motive.
Loaded Plugin: {Directory}
Plugin DLL in the {Directory} has been loaded.
Streaming: Duplicate Frame
Notifying that a duplicate frame has been sent out through the data stream.
Streaming: Discontinuous Frame ID.
Notifying that the streamed frame ID was discontinuous.
Network client connect request received.
A NatNet client application has requested to connect to the server application, Motive.
Network client disconnect request received.
A NatNet client application has requested to disconnect from the server application, Motive.
Network client validation request received.
A NatNet client application is requesting validation in order to connect to the server application, Motive.
Continuous Calibration: (Status)
Sampling: Indicates that the Continuous Calibration feature is sampling reconstructions for updating the calibration.
Refining: Indicates that the continuous calibration feature is refining and updating the calibration.
Calibration Validated. No refinement applied: indicates that a refinement was deemed lower quality than the previous sampling, so Continuous Calibration was not updated.
Calibration partition updated
Calibration: Need more samples from cameras x, y, z...
Indicates that a camera needs more marker samples. To remedy this, add more markers to the volume.
Calibration: Need more distributed samples from Cameras x, y, z...
Indicates that markers are not fully dispersed in a camera's view. To remedy this, spread markers more evenly so that they cover more of the camera's view.
CAM Camera #: Not Receiving Frame Data.
Indicates that Camera (#) is not receiving frame data. This could simply be because the cameras are still waiting to be initialized. If this status persists, it is likely due to a hardware problem.
CAM Camera #: Packet Header CRC Fail
Error in the received camera data packet. Data packets from the cameras are invalid.
CAM Synchronization: Invalid Packet Received
Invalid packet was received. Indicates an encounter of networking error on the camera synchronization.
CAM Synchronization: Packet Header CRC Fail
Error in the received synchronization data packet. Indicates an encounter of networking error on the camera synchronization.
CAM Synchronization: Packet Length Fail
Received packet length invalid. Indicates an encounter of networking error on the camera synchronization.
2D: Camera Stalled
Cameras are stalled. Please check the cable connections and make sure an appropriate cable type is used. Also make sure the cables have electromagnetic interference shielding: when cables without shielding are bundled close together, they can interfere with each other and cause the cameras to stall. Please note that flat Ethernet cables often do not have electromagnetic interference shielding.
CAM Camera #: Dropped Frame
The received frame was invalid and it was dropped. Cameras are not working correctly.
CAM Synchronization: Dropped Frame
Data synchronization failed and the frame has been dropped.
The Properties pane can be accessed by clicking on the icon on the toolbar.
The Properties pane lists out the settings configured for selected objects. In Motive, each type of asset has a list of associated properties, and you can access and modify them using the Properties pane. These properties determine how the display and tracking of the corresponding items are done in Motive. This page will go over all of the properties, for each type of asset, that can be viewed or configured in Motive.
Properties will be listed for recorded Takes, Rigid Body assets, Skeleton assets, force plate devices, and NI-DAQ devices. Detailed descriptions of the corresponding properties are documented on the following pages:
Selected Items
The Properties pane contains advanced settings that are hidden by default. To access these settings, go to the menu in the top-right corner of the pane and click Show Advanced; all of the settings, including the advanced settings, will then be listed in the pane.
The list of advanced settings can also be customized to show only the settings needed for your specific capture application. To do so, go to the pane menu and click Edit Advanced, then uncheck the settings that you wish to be listed in the pane by default. Once all desired settings are unchecked, click Done Editing to apply the customized configuration.
When a force plate is selected in Motive, its device information is listed under the Properties pane. To configure force plate properties, use the Devices pane and modify the corresponding device properties.
For more information, read through the force plate setup pages:
Advanced Settings
The Properties: Force Plates pane contains advanced settings that are hidden by default. To access these settings, go to the menu in the top-right corner of the pane and click Show Advanced; all of the settings, including the advanced settings, will then be listed in the pane.
The list of advanced settings can also be customized to show only the settings needed for your specific capture application. To do so, go to the pane menu and click Edit Advanced, then uncheck the settings that you wish to be listed in the pane by default. Once all desired settings are unchecked, click Done Editing to apply the customized configuration.
Force Plate Group Properties:
Group policy is enforced for force plates from the same vendor, meaning most force plate properties are shared within the force plate group. Shared settings include the enabled status, sampling rate, and sync mode. In most cases, these settings should be configured identically for all force plates. If you need to disable a specific force plate in the group, this must be done by powering off the amplifier or disabling the device from the Windows Device Manager.
Enables or disables selected force plate. Only enabled force plates will be shown in Motive and be used for data collection.
Select whether the force plate is synchronized through a recording trigger. This must be set to Device when force plates are synchronized through recording trigger signal from the eSync. This must be set to None when synchronizing through a clock signal.
When set to true, the force plate system synchronizes by reference to an external clock signal. This must be enabled for reference clock sync. When the two systems sync using the recording trigger, this must be turned off.
Indicates the output port on the eSync that is used for synchronizing the selected force plate. This must match the output port on the eSync that is connected to the force plate amplifier and sending out the synchronization signal.
Resulting data acquisition rate of the force plates. For reference clock sync setups, it will match the frequency of the clock signal. For triggered sync setups, this will match the multiple of the camera system frame rate.
Assigned number of the force plates.
Name of the Motive asset associated with the selected device. For Manus Glove integration, this must match the name of the Skeleton.
Name of the selected force plate.
Model number of the force plate.
Force plate serial number.
Number of active channels available in the selected device. For force plates, this defaults to 6 with channels responsible for measuring 3-dimensional force and moment data.
Indicates the state that the force plate is in. If the force plate is streaming data, it will indicate Receiving Data. If the force plate is on standby for data collection, it will indicate Ready.
Size scale of the resultant force vector shown in the 3D viewport.
Length of the force plate.
Width of the force plate.
Manufacturer defined electrical-to-mechanical offset values.
Lists out positions of the four force plate corners. Positions are measured with respect to the global coordinate system, and this is calibrated when you Set Position using the CS-400 calibration square.
This page provides information on the Info pane, which can be accessed from the View tab or by clicking on the icon on the toolbar.
The Info pane can be used to check tracking quality in Motive. There are two different types of tools you can use from this pane: measurement tools and Rigid Body information. You can switch between the types from the context menu. The measurement tool allows you to use a calibration wand to check the detected wand length and its error compared to the expected wand length.
The Measurement Tool is used to check calibration quality and tracking accuracy of a given volume. There are two tools in this: the Wand Validation tool and the Marker Movement tool.
This tool works only with a fully calibrated capture volume and requires the calibration wand that was used during the calibration process. It compares the length of the captured calibration wand against its known theoretical length and computes the percent error for the tracking volume, from which you can analyze tracking accuracy (see the formula after the steps below).
In Live mode, open the Measurements pane under the Tools tab.
Access the Accuracy tools tab.
Bring the calibration wand into the volume.
Once the wand is in the volume, detected wand length (observed value) and the calculated wand error will be displayed accordingly.
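As a simple formulation (not necessarily Motive's exact computation), the reported wand error can be read as the percent difference between the observed and expected wand lengths:

\[
\text{error}\,(\%) = \frac{\lvert L_{\text{observed}} - L_{\text{expected}} \rvert}{L_{\text{expected}}} \times 100
\]

For example, a 500 mm wand detected at 499.5 mm would correspond to a 0.1% error.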
This tool calculates the measured displacement of a selected marker. You can use this tool to compare the calculated displacement in Motive against how much the marker has actually moved to check the tracking accuracy of the system.
Place a marker inside the capture volume.
Select the marker in Motive.
Under the Marker Measurement section, press Reset. This zeroes the position of the marker.
Slowly translate the marker, and the absolute displacement will be displayed in mm.
The Rigid Bodies tool under the Info pane displays real-time tracking information for a Rigid Body selected in Motive. Reported data includes the total number of tracked Rigid Body markers, the mean error for each of them, and the 6 Degree of Freedom (position and orientation) tracking data for the Rigid Body.
Euler Angles
There are many potential combinations of Euler angles, so it is important to understand the order in which rotations are applied, the handedness of the coordinate system, and the axis (positive or negative) that each rotation is applied about. The following conventions are used for representing Euler orientation in Motive (a rotation-matrix sketch follows the list):
Rotation order: XYZ
All coordinates are right-handed
Pitch is degrees about the X axis
Yaw is degrees about the Y axis
Roll is degrees about the Z axis
Position values are in millimeters
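As an illustrative sketch of these conventions (a hedged reading: it assumes the X, Y, Z rotations compose in the listed order; whether Motive applies them intrinsically or extrinsically should be verified against exported data), the elemental right-handed rotation about X and the composed orientation are:

\[
R_x(\theta) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{pmatrix},
\qquad
R = R_x(\text{pitch})\, R_y(\text{yaw})\, R_z(\text{roll})
\]

where R_y and R_z are the analogous right-handed rotations about the Y and Z axes.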
In Motive, the Labeling pane can be accessed under the View tab or by clicking the icon on the main toolbar.
For more explanation on the labeling workflow, read through the workflow page.
Assign labels to a selected marker for all, or selected, frames in a capture.
Applies labels to a marker within the frame range bounded by trajectory gaps and spikes (erratic change). The Max Spike value sets the threshold for spikes which will be used to set the labeling boundary. The Max Gap size determines the tolerable gap size in a fragment, and trajectory gaps larger than this value will set the labeling boundary. This setting is efficient when correcting labeling swaps.
This sets the tolerable gap sizes for both gap ends of the fragment labeling.
Sets the max allowable velocity of a marker (mm/frame) for it to be considered as a spike.
When using the Spike or Fragment range setting, the label will be applied until the marker trajectory is discontinued with a gap that is larger than the maximum gap defined above. When using the All or Selected range setting, the label will be applied to the entire trajectory or just the selected ranges.
Assigns the selected label onto a marker for current frame and frames forward.
Assigns selected label onto a marker for current frame and frames backward.
Assigns selected label onto the marker for current frame, frames forward, and frames backward.
Skeleton properties determine how Skeleton assets are tracked and displayed in Motive.
To view the related properties, select a Skeleton asset in the Assets pane or in the 3D viewport, and the corresponding properties will be listed under the Properties pane. These properties can be modified in both Live and Edit modes. Default creation properties are listed under the application settings.
Advanced Settings
The Properties: Skeleton pane contains advanced settings that are hidden by default. To access these settings, go to the menu in the top-right corner of the pane and click Show Advanced; all of the settings, including the advanced settings, will then be listed in the pane.
The list of advanced settings can also be customized to show only the settings needed for your specific capture application. To do so, go to the pane menu and click Edit Advanced, then uncheck the settings that you wish to be listed in the pane by default. Once all desired settings are unchecked, click Done Editing to apply the customized configuration.
Shows the name of selected Skeleton asset.
Enables/disables both tracking of the selected Skeleton and its visibility in the perspective viewport.
The minimum number of markers that must be tracked and labeled in order for a Rigid Body asset, or each Skeleton bone, to be booted or first tracked.
The minimum number of markers that must be tracked and labeled in order for a Rigid Body asset, or each Skeleton bone, to continue to be tracked after the initial boot.
[Advanced] Euler angle rotation order used for calculating the bone hierarchy.
Selects whether or not to display the Skeleton name in the 3D Perspective View.
Selects how the Skeleton will be shown in the 3D perspective view.
Segment: Displays Skeleton as individual Skeleton segments.
Avatar (male): Displays Skeleton as a male avatar.
Avatar (female): Displays Skeleton as a female avatar.
Sets the color of the Skeleton.
This feature is supported in Live mode and 2D mode only. When enabled, the color of the Skeleton segments will change whenever there are tracking errors.
Show or hide Skeleton bones.
[Advanced] Displays the orientation axes of each segment in the Skeleton.
[Advanced] Shows the Asset Model Markers as transparent spheres on each Skeleton segment. The asset model markers are the expected marker locations according to the Skeleton solve.
[Advanced] Draws lines between labeled Rigid Body or Skeleton markers and corresponding expected marker locations. This helps to visualize the offset distance between actual marker locations and the asset model markers.
[Advanced] Displays lines between each Skeleton marker and its associated Skeleton segment.
Applies double-exponential smoothing to the translation and rotation of a Rigid Body or a skeletal bone. Disabled at 0.
Compensates for system latency by predicting bone movement into the future. For this feature to work best, smoothing needs to be applied as well. Disabled at 0.
[Advanced] When needed, you can damp down the translational and/or rotational tracking of a Rigid Body or a Skeleton bone on selected axes.
The status panel lists out the system parameters for monitoring the live status of system operations. Click on the displayed status at the bottom right corner of Motive, and the Status Panel will pop up. You can drag and place the Status Panel anywhere.
Current incoming data transfer rate (KB/s) for all attached cameras.
Measured latency of the point cloud reconstruction engine.
Measured latency of the Rigid Body solver and the Skeleton solver combined.
Measured software latency. It represents the amount of time it takes Motive to process each frame of captured data. This includes the time taken for reconstructing the 2D data into 3D data, labeling and modeling the trackable assets, displaying in the viewport, and other processes configured in Motive.
Available only on Ethernet Camera systems (Prime series and Slim13E). Measured total system latency. This is the time measured from the middle of the camera exposures to when Motive has fully solved all of the tracking data.
The data rate at which the tracking data is streamed to connected client applications.
Final data acquisition rate of the system.
Available only on Ethernet Camera systems (Prime series or Slim 13E). Average temperature, in Celsius, on the imager boards of the cameras in the system.
When there is increased latency in any part of the processing pipeline that needs attention, it will be highlighted in purple. Increased processing latency may result in dropped frames when processing data in real time in live captures or in 2D Mode. Increased latency usually occurs when the CPU is not fast enough to process the data in real time. If you perform post-processing reconstructions, you will be accessing the recorded 3D data or solved data (Rigid Bodies); no processing is required for the corresponding pipelines and they will be indicated as inactive.
When a Take is selected from the Data pane, related information will be displayed in the Properties pane.
From the Properties pane, you can get general information about the Take, including the total number of recorded frames, the capture date/time, and the list of assets involved in the recording. Also, when needed, the solver settings that were used in the recorded TAK can be modified, and these changes will be applied when performing post-processing reconstruction.
Take name
The camera frame rate in which the take was captured. The Take file will contain the corresponding number of frames for each second.
The frame ID of the first frame saved on the Take.
The frame ID of the last frame saved on the Take.
A timestamp of when the recording was started.
A timestamp of when the recording was ended.
Names of assets that are included in the Take.
Comments regarding the take can be noted here for additional information.
Date and time when the capture was recorded.
The version of Motive in which the Take was recorded. (This applies only to Takes captured in version 1.10 or above.)
The build of Motive in which the Take was recorded.
The data quality of the Take which can be flagged by users.
Progress indicator showing how far into the post-processing workflow this Take has progressed.
Camera system calibration details for the selected Take. Takes recorded in older versions of Motive may not contain this data.
Shows when the cameras were calibrated.
Displays a mean error value of the detected wand length samples throughout the wanding process.
Displays percentile distribution of the wand errors.
Shows what type of wand was used: Standard, Active, or Micron series.
Displays the length of the calibration wand used for the capture.
Distance from one of the end markers to the center marker, specifically the shorter segment.
Important Note
Please note that the OptiHub 2 is not designed for precise synchronization with external devices; it provides only a rough synchronization to a trigger event on the input/output signal. With an OptiHub 2, there will be some amount of time delay between the trigger events and the desired actions, and for this reason the OptiHub 2 is not suitable for precisely synchronizing to an external device. To accomplish such synchronization, it is recommended to use the eSync instead, along with an Ethernet camera system.
By modifying the device properties of the OptiHub, users can customize the sync configuration of the camera system for implementing external devices in various sync chain setups. This page lists the properties of the OptiHub. For general instructions on customizing sync settings for integrating external devices, it is recommended to read through the synchronization setup guide.
This option is only valid if the Sync Input: Source is set to Internal Sync. Controls the frequency in Hertz (Hz) of the OptiHub 2's internal sync generator. Valid frequency range is 8 to 120 Hz.
This option is only valid if the Sync Input: Source is set to Sync In or USB Sync. Controls the synchronization delay in microseconds (us) between the chosen sync source signal and when the cameras are actually told to expose. This is a global system delay that is independent of, and in addition to, an individual camera's exposure delay setting. Valid range is 0 to 65862 us, and the delay should not exceed one frame period of the external signal.
To set up the sync input signals, first define an input Source and configure the desired trigger settings for that source:
Internal/Wired sets the OptiHub 2 as the sync source. This is the default sync configuration which uses the OptiSync protocol for synchronizing the cameras. The Parent OptiHub 2 will generate an internal sync signal which will be propagated to other (child) OptiHub 2(s) via the Hub Sync Out Jack and Hub Sync In Jack. For V100:R1(legacy) and the Slim 3U cameras, Wired Sync protocol is used. In this mode, the internal sync signal will still be generated but it will be routed directly to the cameras via daisy-chained sync cables.
Sync In sets an external device as the sync source.
This option is only valid if the Sync Input: Source is set to Internal Sync. Controls the frequency in Hertz (Hz) of the OptiHub 2's internal sync generator, and this frequency will control the camera system frame rate. Valid frequency range is 8 to 120 Hz.
Detects and displays the frequency of the sync signal that's coming through the input port of the parent OptiHub 2, which is at the very top of the RCA sync chain. When sync source is set to Sync In, the camera system framerate will be synchronized to this input signal. Please note that OptiHub 2 is not designed for precise sync, so there may be slight sync discrepancies when synchronizing through OptiHub 2.
Manually adds a global sync time offset to how the camera system reacts to the received input signal. The unit is microseconds.
Selects the trigger condition from the connected input source: Either Edge, Rising Edge, Falling Edge, Low Gated, or High Gated.
Allows a triggering rate compatible with the camera frame rate to be derived from higher-frequency input signals (e.g. 300 Hz decimated down to 100 Hz for use with a V100:R2 camera). Valid range is 1 (no decimation) to 15 (every 15th trigger signal generates a frame). See the rate formula after this list of properties.
Detects and displays the frequency of the parent source.
Allows the user to pass or block trigger events generated by the internal sync control. This option has been deprecated for use in the GUI. Valid options are Gate-Open and Gate-Closed.
Allows a triggering rate compatible with the camera frame rate to be derived from higher-frequency input signals (e.g. 360 Hz decimated down to 120 Hz for use with a Flex 13 camera). Valid range is 1 (no decimation) to 15 (every 15th trigger signal generates a frame).
Selects condition and timing for a pulse to be sent out over the External Sync Out jack. Available Types are: Exposure Time, Pass-Through, Recording Level, and Recording Pulse.
Polarity
Selects the output polarity of the External Sync Out signal. Valid options are Normal and Inverted. Normal signals are low and pulse high; inverted signals are high and pulse low.
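As referenced in the Input Divider descriptions above, the effective camera trigger rate under decimation is simply the input signal rate divided by the decimation factor:

\[
f_{\text{camera}} = \frac{f_{\text{input}}}{D}, \qquad \text{e.g. } \frac{300\ \text{Hz}}{3} = 100\ \text{Hz} \ \text{ and } \ \frac{360\ \text{Hz}}{3} = 120\ \text{Hz}
\]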
By modifying the device properties of the eSync, users can customize the sync configurations of the camera system for implementing various sync chain setups.
While the eSync is selected under the Devices pane, use the Properties pane to monitor the eSync properties. Here, users can configure the parent sync source of the camera system and also the output sync signals from the eSync for integrating child devices. For a specific explanation of the steps for synchronizing external devices, read through the synchronization setup page.
Configure the input signal by first defining which input source to use. Available input sources include Internal Free Run, Internal Clock, SMPTE Timecode In, Video Gen Lock, Inputs (input ports), Isolated, VESA Stereo In, and Reserved. Respective input configurations appear on the pane when a source is selected. For each selected input source, the signal characteristics can be modified.
Synchronization Input Source Options
Controls the frequency of the eSync 2's internal sync generator when using the internal clock.
Introduces an offset delay, in microseconds, to the selected trigger signal.
Sets the trigger mode. Available modes are Either Edge, Rising Edge, and Falling Edge, and each of them uses the corresponding characteristic of the input signal as a trigger.
Allows a triggering rate, compatible with the camera frame rate, to be derived from higher frequency input signals.
Allows a triggering rate, compatible with the camera frame rate, to be derived from lower frequency input signals. Available multiplier range: 1 to 15.
Displays the final rate of the camera system.
eSync 2 ports vs eSync ports
In the eSync 2, three general input ports are implemented in place of the Lo-Z and Hi-Z input ports from the eSync. These general input ports are designed for high impedance devices, but low impedance devices can also be connected with appropriate adjustments. When the first-generation eSync is connected to the system, options for Lo-Z and Hi-Z will be displayed.
Lo-Z input: Sets an external low impedance device as the trigger. The max signal voltage cannot exceed 5 Volts.
Hi-Z input: Sets an external high impedance device as the trigger. The max signal voltage cannot exceed 5 Volts.
Allows you to configure the signal type and polarity of the synchronization signal through the output ports, including the VESA stereo output port, on the eSync 2.
Type: Defines the output signal type of the eSync 2. Use this to sync external devices to the eSync 2.
Polarity: Change the polarity of the signal to normal or inverted. Normal signals constantly output a low signal and pulse high when triggering. Inverted signals constantly output a high signal and pulse low when triggering.
Output Signal Types
Trigger Source: Determines which trigger source is used to initiate recording in Motive. Available options are Software, Isolated, and Inputs. When the trigger source is set to Software, recording is initiated in Motive.
With the eSync 2, external triggering devices (e.g. a remote start/stop button) can be integrated into the camera system and set to trigger the recording start and stop events in Motive. Such devices connect to the input ports of the eSync 2 and are configured under the Record Triggering section of the eSync 2 properties.
By default, the remote trigger source is set to Software, which corresponds to record start/stop button click events in Motive. When an external trigger source is used (Trigger Source → Isolated or Inputs), set the trigger source to the corresponding input port and select an appropriate trigger edge. Available trigger options include Rising Edge, Falling Edge, High Gated, or Low Gated; the appropriate option depends on the signal morphology of the external trigger. After the trigger settings have been defined, press the recording button in advance. This sets Motive into a standby mode until the trigger signal is detected through the eSync. When the trigger signal is detected, Motive will start the actual recording. The recording will stop and return to the 'armed' state when the second trigger signal, or the falling edge of the gated signal, is detected.
Under the Record Triggering section, set the source to the respective input port where the trigger signal is inputted.
Choose an appropriate trigger option, depending on the morphology of the trigger signal.
Press the record button in Motive, which prepares Motive for recording. At this stage, Motive awaits an incoming trigger signal.
When the first trigger is detected, Motive starts recording.
When the second trigger is detected, Motive stops recording and awaits the next trigger for repeated recordings. For the High Gated and Low Gated trigger options, Motive will record during the respective gated windows.
Once all the recording is finished, press the stop button to disarm Motive.
Input Monitor displays the corresponding signal input frequency. This feature is used to monitor the synchronization status of the signals into the eSync 2.
Displays the frequency of the Internal Clock in the eSync 2.
Displays the frequency of the timecode input.
Displays the frequency of the video genlock input.
Displays the frequency of the input signals into the eSync 2.
Displays the frequency of the external low impedance sync device.
Displays the frequency of the external high impedance sync device.
Displays the frequency of the external generic sync device.
For internal use only.
Synchronization Input Source Options
Controls the frequency of the eSync 2's internal sync generator when using the internal clock.
Introduces an offset delay, in microseconds, to the selected trigger signal.
Sets the trigger mode. Available modes are Either Edge, Rising Edge, and Falling Edge, and each of them uses the corresponding characteristic of the input signal as a trigger.
Allows a triggering rate, compatible with the camera frame rate, to be derived from higher frequency input signals.
Allows a triggering rate, compatible with the camera frame rate, to be derived from lower frequency input signals. Available multiplier range: 1 to 15.
Displays the final rate of the camera system.
eSync 2 ports vs eSync ports
In the eSync 2, three general input ports are implemented in place of Lo-Z and Hi-Z input ports from the eSync. These general input ports are designed for high impedance devices, but low impedance devices can also be connected with appropriate adjustments. When the eSync is connected to the system, options for Lo-Z and Hi-Z will be displayed.
Lo-Z input: Sets an external low impedance device as the trigger. The max signal voltage cannot exceed 5 Volts.
Hi-Z input: Sets an external high impedance device as the trigger. The max signal voltage cannot exceed 5 Volts.
Allows you to configure the signal type and polarity of the synchronization signal through the output ports, including the VESA stereo output port, on the eSync 2.
Defines the output signal type of the eSync2. Use this to sync external devices to the eSync2.
Polarity
Change the polarity of the signal to normal or inverted. Normal signals constantly output a low signal and pulse high when triggering. Inverted signals constantly output a high signal and pulse low when triggering.
Output Signal Types
Trigger Source: Determines which trigger source is used to initiate recording in Motive. Available options are Software, Isolated, and Inputs. When the trigger source is set to Software, recording is initiated in Motive.
Input Monitor displays the corresponding signal input frequency. This feature is used to monitor the synchronization status of the signals into the eSync 2.
Internal Clock: Displays the frequency of the Internal Clock in the eSync 2.
SMPTE Timecode In: Displays the frequency of the timecode input.
Video Genlock In: Displays the frequency of the video genlock input.
Inputs: Displays the frequency of the input signals into the eSync 2.
Lo-Z: Displays the frequency of the external low impedance sync device.
Hi-Z: Displays the frequency of the external high impedance sync device.
Isolated: Displays the frequency of the external generic sync device.
Reserved: For internal use only.
You can also export configured constraints, or import them, using the Constraints pane. To do this, simply click on the "..." menu at the top of the pane, and there will be options to export, import, and generate constraints.
When the cameras detect reflections in their view, it will be indicated with a warning sign to alert which cameras are seeing reflections; for Prime series cameras, the indicator LED ring will also light up in white.
Assets pane: While the markers are selected in Motive, click on the add button in the Assets pane.
In Edit mode, when this option is enabled, Motive will access the recorded 2D data of the current Take. In this mode, Motive will be live-reconstructing from the recorded 2D data and you will be able to inspect the reconstructions and marker rays from the viewports.
The session folder can be opened or closed using the button at the bottom left corner.
In the list of session folders, a currently loaded session folder is noted with a flag symbol and a selected session folder will be highlighted in white. To add a new folder, click the button.
The star mark allows users to mark the best Takes. Simply click on the star icon and mark the successful Takes.
The health status column of the Takes indicates the user-selected status of each take:
Excellent capture
OK capture
Poor capture
Indicates whether 2D data exists on the corresponding Take.
Indicates whether reconstructed 3D data exists on the corresponding Take.
If 3D data does not exist on a Take, it can be derived from 2D data by performing the reconstruction pipeline. See the reconstruction page for more details.
Indicates whether reference videos exist in the Take. Reference videos are recorded from cameras that are set to either MJPEG grayscale or raw grayscale modes.
Indicates whether any of the assets have solved data baked into the Take.
Indicates whether synchronized audio data has been recorded with the Take.
Indicates whether analog data recorded using a data acquisition device exists in the Take. See the NI-DAQ Setup page.
Timecode stamped to the starting frame of the Take. This is available only if a timecode signal was integrated into the system.
Take lists can be imported from a CSV file that contains a take name on each row. To import, click on the top-right menu icon and select Import Shot List.
In the Data pane, the context menu for captured Takes can be brought up by clicking on the icon or by right-clicking on the selected Take(s). The context menu lists the options for performing the corresponding pipelines on the selected Take(s), including essential pipelines such as reconstruction, auto-labeling, and data export. Available options are listed below.
Evaluating: Indicates that the feature is assessing the calibration quality.
Indicates that the calibration has been automatically updated. The updated mean error value will also be reported.
Multiplier applied to the camera system frame rate. This is available only for triggered sync and can also be configured from the Devices pane. The resulting rate determines the sampling rate of the force plates.
Under the Wand Measurement section, the pane indicates which wand was used for the volume calibration and its expected length (theoretical value), depending on the type of wand that was used during the system calibration.
The Labeling pane includes a list of marker labels associated with the capture. The color of each label tells whether the marker is tracked in the current frame, and the corresponding gap percentage is indicated next to each label. When a marker set is chosen under the Marker Set dropdown menu, only the associated labels will be listed. In addition, the marker set selection can also be linked to the 3D selection in the perspective view pane when the Link to 3D button is enabled.
Average residual value of all live-reconstructed 3D points. This is available only in Live mode or in 2D Mode.
With large camera systems, the Point Cloud engine may experience increased latency due to the amount of data it needs to handle in real time. If the increased latency is causing frame drops or affecting the tracking quality, you can exclude selected cameras from contributing to the real-time reconstruction. In the Devices pane, reveal the Reconstruction setting from the header context menu, and disable this setting for the cameras that you wish to process later. 2D frames captured by these cameras will be recorded in the TAK but will not contribute to real-time reconstruction. This reduces the amount of data to be processed in real time, and you will still be able to utilize the 2D frames using the post-processing reconstruction pipeline.
Marks the best take. Takes that are marked as best can also be accessed via scripts.
Shows mean offset value during calibration.
Displays percentile distribution of the errors.
The camera filter settings in the Take properties determine which IR lights from the recorded 2D camera data contribute to the reconstruction when re-calculating the 3D data, when needed.
For more information on these settings in Live mode, please refer to the corresponding application settings page.
The Solver/Reconstruction settings under the Take properties are the 3D data solver parameters that were used to obtain the 3D data saved in the Take file. In Edit mode, you can change these parameters and perform the post-processing reconstruction to obtain a new set of 3D data with the modified parameters.
For more information on these settings in Live mode, please refer to the corresponding application settings page.
While the OptiHub is selected under the Devices pane, use the Properties pane to view and configure its properties. By doing so, users can set the parent sync source for the camera system, configure how the system reacts to input signals, and also choose which signals to output from the OptiHub for triggering other external acquisition devices.
USB Sync sets an external USB device as the sync source. This mode is for customers who use the development kits and would like to have their software trigger the cameras instead. Using the provided API, the OptiHub 2 will receive the trigger signal from the PC via the OptiHub 2's USB uplink connection.
The Internal/Wired input source uses the OptiHub 2's internal synchronization generator as the main sync source. You can modify the synchronization frequency for both protocols under the Synchronization Control section. When you adjust the system frame rate from this panel, the modified frame rate may not be reflected in the Devices pane; check the streaming section of the status bar for the exact information.
The Sync In input source setting uses signals coming into the input ports of the OptiHub 2 to trigger the synchronization. Please refer to the external device synchronization page for more instructions on this.
USB Sync (the camera system will be the child) sets an external USB device as the sync source. This mode is for customers who use the development kits and would like to have their software trigger the cameras instead. Using the provided API, the OptiHub 2 will receive the trigger signal from the PC via the OptiHub 2's USB uplink connection.
Sync signals can also be sent out through the output ports of the OptiHub 2 to child devices in the synchronization chain.
Note: When capturing multiple recordings via the recording trigger, only the first TAK will contain the 3D data. For the subsequent TAKs, the 3D data must be reconstructed through the post-processing reconstruction pipeline.
Open the Devices pane and the Properties pane to access the eSync 2 properties.
Either Edge
Uses either the rising or falling edge of the pulse signal.
Rising Edge
Uses the rising edge of the pulse signal.
Falling Edge
Uses the falling edge of the pulse signal.
High Gated
High Gated mode triggers when the input signal is at a high voltage level, but stops triggering at a low voltage level.
Low Gated
Low Gated mode triggers when the input signal is at a low voltage level, but stops triggering at a high voltage level.
Exposure Time
Outputs a pulse signal when the cameras expose.
Pass-Through
Passes the input signal to the output.
Recording Gate
Outputs a constant high level signal while recording; at other times the signal is low. (Referred to as Recording Level in older versions.)
Gated Exposure Time
Outputs a pulse signal when the cameras expose, during a recording only. (Referred to as Recording Pulse in older versions.)
Internal Free Run
This is the default synchronization protocol for Ethernet camera systems without an eSync2. In this mode, Prime series cameras are synchronized by communicating the time information with each other through the camera network itself using a high-precision algorithm for timing synchronization.
Internal Clock
Sets the eSync 2 to use its internal clock to deliver the sync signal to the Ethernet cameras, and the sync signal can be modified as well.
SMPTE Timecode In
Sets a timecode sync signal from an external device as the input source signal.
Video Gen Lock
Locks the camera sync to an external video sync signal.
Isolated
Used for generic sync devices connected to the Isolated Sync In port from the eSync 2. Considered safer than other general input ports (Hi-Z and Lo-Z). The max signal voltage cannot exceed 12 Volts.
Inputs
Uses signals through the input ports of the eSync 2. Used for high impedance output devices. The max signal voltage cannot exceed 5 Volts.
VESA Stereo In
Sets cameras to sync to signal from the VESA Stereo input port.
Reserved
Internal use only.
Exposure Time
Outputs a pulse signal when the cameras expose.
Recording Gate
Outputs a constant high level signal while recording. Other times the signal is low.
Record Start/Stop Pulse
Outputs a pulse signal both when the system starts and stops recording.
Gated Exposure Time
Outputs a pulse signal when the cameras expose, when the system is recording.
Gated Internal Clock
Outputs the internal clock, while the system is recording.
Selected Sync
Outputs the Sync Input signal without factoring in signal modifications (e.g. input dividers).
Adjusted Sync
Outputs the Sync Input signal accounting for adjustments made to the signal.
Internal Clock
SMPTE Timecode In
Video Genlock In
Isolated
Inputs
VESA Stereo In
Reserved
Uses a selected input signal to generate the synchronization output signal.
Internal Free Run
This is the default synchronization protocol for Ethernet camera systems without an eSync 2. In this mode, Prime series cameras are synchronized by communicating the time information with each other through the camera network itself using a high-precision algorithm for timing synchronization.
Internal Clock
Sets the eSync 2 to use its internal clock to deliver the sync signal to the Ethernet cameras, and the sync signal can be modified as well.
SMPTE Timecode In
Sets a timecode sync signal from an external device as the input source signal.
Video Gen Lock
Locks the camera sync to an external video sync signal.
Isolated
Used for generic sync devices connected to the Isolated Sync In port from the eSync 2. Considered safer than other general input ports (Hi-Z and Lo-Z). The max signal voltage cannot exceed 12 Volts.
Inputs
Uses signals through the input ports of the eSync2. Used for high impedance output devices. The max signal voltage cannot exceed 5 Volts.
VESA Stereo In
Sets cameras to sync to signal from the VESA Stereo input port.
Reserved
Internal use only.
Exposure Time
Outputs a pulse signal when the cameras expose.
Recording Gate
Outputs a constant high level signal while recording. Other times the signal is low.
Record Start/Stop Pulse
Outputs a pulse signal both when the system starts and stops recording.
Gated Exposure Time
Outputs a pulse signal when the cameras expose, when the system is recording.
Gated Internal Clock
Outputs the internal clock, while the system is recording.
Selected Sync
Outputs the Sync Input signal without factoring in signal modifications (e.g. input dividers).
Adjusted Sync
Outputs the Sync Input signal accounting for adjustments made to the signal.
Internal Clock
SMPTE Timecode In
Video Genlock In
Isolated
Inputs
VESA Stereo In
Reserved
Uses a selected input signal to generate the synchronization output signal.
Selection Mode
Options for switching between Select mode and QuickLabel mode. Select mode is used for normal operations, and QuickLabel mode allows assigning each selected label with just one click.
Increment Options
Options for selection increment behavior when labeling:
Do not increment: Selection stays the same after labeling
Go to next label: Selection advances to the next label on the list
Go to next unlabeled marker: Selection advances to the next unlabeled marker on the list.
Unlabel Selected
Unlabels selected trajectories.
Label List Options
Splits the list of labels into two columns for organization purposes. Unlabeled trajectories will be sorted on the right column, and the selected marker set labels are sorted on the left column.
Link to 3D Selection
When this button is enabled, marker label selection will be linked to the selection from the Perspective viewport.
Show Range Settings
When enabled, shows the range settings to determine which frames of the recorded data the label will get applied to.
When an NI-DAQ device is selected in Motive, its device information is listed under the Properties pane; only basic information on the device is shown there. For configuring the properties of the device, use the Devices pane.
For more information, read through the NI-DAQ setup page: NI-DAQ Setup.
Advanced Settings
The Properties: NI-DAQ pane contains advanced settings that are hidden by default. To access these settings, go to the menu in the top-right corner of the pane and click Show Advanced; all of the settings, including the advanced settings, will then be listed in the pane.
The list of advanced settings can also be customized to show only the settings needed for your specific capture application. To do so, go to the pane menu and click Edit Advanced, then uncheck the settings that you wish to be listed in the pane by default. Once all desired settings are unchecked, click Done Editing to apply the customized configuration.
Only enabled NI-DAQ devices will actively measure analog signals.
This setting determines how the recording of the selected NI-DAQ device will be triggered. This must be set to None for reference clock sync and to Device for recording trigger sync.
None: NI-DAQ recording is triggered when Motive starts capturing data. This is used when using the reference clock signal for synchronization.
Device: NI-DAQ recording is triggered when a recording trigger signal to indicate the record start frame is received through the connected input terminal.
(available only when Trigger Sync is set to Device) Name of the NI-DAQ analog I/O terminal where the recording trigger signal is inputted to.
This setting sets whether an external clock signal is used as the sync reference. For precise synchronization using an external clock signal, set this to true.
True: Setting this to true will configure the selected NI-DAQ device to synchronize with an inputted external sample clock signal. The NI-DAQ must be connected to an external clock output of the eSync on one of its digital input terminals. The acquisition rate will be disabled since the rate is configured to be controlled by the external clock signal.
False: NI-DAQ board will collect samples in 'Free Run' mode at the assigned Acquisition Rate.
(available only when Reference Clock Sync is set to True) Name of the NI-DAQ digital I/O terminal that the external clock (TTL) signal is inputted to.
Set this to the output port of the eSync where it sends out the internal clock signal to the NI-DAQ.
Shows the acquisition rate of the selected NI-DAQ device(s).
Depending on the model, NI-DAQ devices may have different sets of allowable input types and voltage ranges for their analog channels. Refer to your NI-DAQ device User's Guide for detailed information about supported signal types and voltage ranges.
(Default: -10 volts) Configure the terminal's minimum voltage range.
(Default: +10 volts) Configure the terminal's maximum voltage range.
Configures the measurement mode of the selected terminal. In general, analog input channels with screw terminals use the single-ended measurement system (RSE), and analog input channels with BNC terminals use the differential (Diff) measurement system. For more information on these terminal types, refer to NI documentation.
RSE: Referenced single-ended. Measurement with respect to ground (e.g. AI_GND). (Default)
NRSE: Non-referenced single-ended. Measurement with respect to a single analog input (e.g. AISENSE).
Diff: Differential. Measurement between two inputs (e.g. AI0+, AI0-).
PseudoDiff: Pseudodifferential. Measurement between two inputs with an impedance to the common ground.
[Advanced] Name of the selected device.
Device model ID, if available.
Device serial number of the selected NI-DAQ assigned by the manufacturer.
Type of device.
Total number of available channels on the selected NI-DAQ device.
[Advanced] The Motive playback mode currently in use.
Whether the device is ready or not.
Tristate status of either Need Sync, Ready for Sync, or Synced. Updates the "State" icon in the Devices pane.
[Advanced] Internal device number.
User editable name of the device.
Properties of individual channels can be configured directly from the Devices pane. As shown in the image, you can click on the icon to bring up the settings and make changes.
This page covers the general specifications of the Prime Color camera. For details on how to set up and use the Prime Color camera, please refer to the Prime Color Setup page in this wiki.
Before diving into specific details, let's begin with a brief overview of Motive. If you are new to Motive, we recommend reading through this page to learn about the basic tools, configurations, and navigation controls, as well as instructions on managing capture files.
In Motive, the recorded mocap data is stored in a file format called Take (TAK), and multiple Take files can be grouped within a session folder. The Data pane is the primary interface for managing capture files in Motive. This pane can be accessed from the icon on the main Toolbar, and it contains a list of session folders and the corresponding Take files that are recorded or loaded in Motive.
Motive saves and loads Motive-specific file formats, including Take files (TAK), camera calibration files (CAL), and Motive user profiles (MOTIVE), which can contain most of the software settings as well as asset definitions for Skeletons and Rigid Body objects. Asset definitions describe trackable objects in Motive and are explained further on the Rigid Body Tracking and Skeleton Tracking pages.
Motive file management is centered on the Take (TAK) file. A TAK file is a single motion capture recording (aka 'take' or 'trial'), which contains all the information necessary to recreate the entire capture from the file, including camera calibration, camera 2D data, reconstructed and labeled 3D data, data edits, solved joint angle data, tracking models (Skeletons, Rigid Bodies), and any additional device data (audio, force plate, etc.). A Motive Take (TAK) file is a completely self-contained motion capture recording, and it can be opened by another copy of Motive on another system.
Take files are forward compatible but not backward compatible: a Take recorded in Motive 2.x will play back in Motive 3.x, but if you record a Take in Motive 3.x and try to play it back in Motive 2.x, Motive will throw an error.
If you have any old recordings from Motive 1.7 or below with the BAK file extension, please import these recordings into Motive 2.0 first and re-save them in the TAK file format in order to use them in Motive 3.0 or above.
A Session is a file folder that allows the user to organize multiple similar Takes (e.g. Monday, Tuesday, Wednesday, or StaticTrials, WalkingTrials, RunningTrials). Whether you are planning the day's shoot or incorporating a group of Takes mid-project, creating session folders can help manage complex sets of data. In the Data pane, you can import session folders that contain multiple Takes or create a new folder to start a new capture session. For the most efficient workflow, plan the mocap session before the capture and organize a list of captures (shots) that need to be completed. Type the Take names in a spreadsheet or a text file, then copy and paste the list into the Data pane; this will automatically create empty Takes (a shot list) with the corresponding names.
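Below is a minimal, hypothetical Python sketch for generating such a shot list as a text file. The subject/trial naming scheme is only an example; any list of names pasted into the Data pane works the same way.

```python
# A minimal sketch for generating a shot list to paste into the Data pane.
# The naming scheme (subject_trial_index) is hypothetical; adjust as needed.
subjects = ["S01", "S02"]
trials = ["Static", "Walking", "Running"]

lines = [f"{subject}_{trial}_{i:02d}"
         for subject in subjects
         for trial in trials
         for i in range(1, 4)]  # three takes per trial type

# Write one Take name per line; copy the file contents and paste them
# into the Data pane to create the empty Takes.
with open("shot_list.txt", "w") as f:
    f.write("\n".join(lines))
```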
Click the button on the toolbar at the bottom of the Data pane to hide or expand the list of open Session Folders.
The active Session Folder is noted with a flag icon. To switch to a different folder, left-click on the folder name in the Session list.
Please refer to the Session Folders section of the Data pane page for more information on working with these folders.
Software configurations are saved in Motive profile (*.motive) files. The profile stores all of the application-related configurations, the list of assets, and the loaded session folders. You can export and import profiles to easily maintain the same software configurations each time Motive is launched.
All of the currently configured software settings are saved to the C:\ProgramData\OptiTrack\MotiveProfile.motive file periodically throughout capture and when closing out of Motive. This file is the default application profile, and it gets loaded back when Motive is launched again. This allows all of the configurations to persist between different sessions of Motive. If you wish to revert all of the settings to their factory defaults, use the Reset Application Settings button under the Edit tab of the main command bar.
Motive profiles can also be exported and imported from the File menu of the main command bar. Using the profiles, you can easily transfer and persist Motive configurations among different instances and different computers.
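Since the profile is a regular file at the path above, it can also be backed up or copied between machines with a simple script. Below is a minimal sketch; the backup destination is an arbitrary assumption, and Motive should be closed so the file is not rewritten mid-copy.

```python
# A minimal sketch for backing up the default application profile so it can
# be restored later or moved to another machine. Run while Motive is closed.
import shutil
from pathlib import Path

profile = Path(r"C:\ProgramData\OptiTrack\MotiveProfile.motive")
backup_dir = Path(r"D:\MotiveBackups")  # hypothetical destination folder
backup_dir.mkdir(parents=True, exist_ok=True)

shutil.copy2(profile, backup_dir / profile.name)  # preserves timestamps
```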
The following are saved in the application profile:
Application Settings
Live Pipeline Settings
Streaming Settings
Synchronization Settings
Export Settings
Rigid Body & Skeleton assets
Rigid Body & Skeleton settings
Labeling settings
Hotkey configurations
A calibration file is a standalone file that contains all of the information required to completely restore a calibrated camera volume, including the position and orientation of each camera, lens distortion parameters, and the camera settings. After a camera system is calibrated, the CAL file can be exported and imported back into Motive when needed. Thus, it is recommended to save out the camera calibration file after each round of calibration.
Please note that reconstruction settings are also stored in the calibration file, just as they are stored in the MOTIVE profile. If a calibration file is imported after a profile file has been loaded, the import may overwrite the previously loaded reconstruction settings.
Note that this file is reliable only if the camera setup has remained unchanged since the calibration. Read more on the Calibration page.
The following are saved in the calibration file:
Reconstruction settings
Camera settings
Position and orientation of the cameras
Location of the global origin
Lens distortion of each camera
Default System Calibration
The default system calibration is saved to the C:\ProgramData\OptiTrack\Motive\System Calibration.cal file and is loaded automatically at application startup to provide instant access to the 3D volume. This file also gets updated each time the calibration is modified and when closing out of Motive.
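Because a calibration is reliable only while the camera setup remains unchanged, it can be useful to archive a timestamped copy after each round of calibration. A minimal sketch, assuming an arbitrary archive folder:

```python
# A minimal sketch that archives the default system calibration with a
# timestamp, so each round of calibration is preserved for later reuse.
import shutil
from datetime import datetime
from pathlib import Path

cal = Path(r"C:\ProgramData\OptiTrack\Motive\System Calibration.cal")
archive_dir = Path(r"D:\CalibrationArchive")  # hypothetical destination folder
archive_dir.mkdir(parents=True, exist_ok=True)

stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
shutil.copy2(cal, archive_dir / f"System Calibration {stamp}.cal")
```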
In Motive, the main viewport is fixed at the center of the UI and is used for monitoring the 2D or 3D capture data both in live capture and during playback of recorded data. The viewport can be set to either the Perspective View or the Camera View. The Perspective View mode shows the reconstructed 3D data within the calibrated 3D space, and the Camera View mode shows the 2D images from each camera in the setup. These modes can be selected from the drop-down menu at the top-left corner of the viewport, and both views are essential for assessing and monitoring the tracking data.
Use the dropdown menu at the top-left corner to switch into the Perspective View mode. You can also use the number 1 hotkey while on a viewport.
Used to look through the reconstructed 3D representation of the capture, analyze marker positions, rays used in reconstruction, etc.
The context menu in the Perspective View allows you to access more options related to the markers and assets in 3D tracking data.
Use the dropdown menu at the top-left corner to switch into the Camera View mode. You can also use the number 2 hotkey while on a viewport.
Each camera's view can be accessed from the Camera Preview pane, which displays the images being transmitted from each camera in the selected image processing mode (e.g. grayscale mode or object mode).
Detected IR lights and/or reflections are also shown in this pane. Only the IR lights that satisfy the object filters are considered markers.
From the Camera Preview pane, you can mask certain pixel regions to exclude them from processing.
When needed, the viewport can be split into four smaller views. This can be selected from the menu at the top-right corner of the viewport, or by using the Shift + 4 hotkey.
When needed, an additional Viewer pane can be opened under the View tab or by clicking the icon on the main toolbar.
Most of the navigation controls in Motive are customizable, including both mouse and Hotkey controls. The Hotkey Editor Pane and the Mouse Control Pane under the Edit tab allow you to customize mouse navigation and keyboard shortcuts to common operations.
Mouse controls in Motive can be customized from the application settings panel to match your preference. Motive also includes a variety of common mouse control presets so that any new users can easily start controlling Motive. Available preset control profiles include Motive, Blade, Maya, and Visual3D. The following table shows a few basics actions that are commonly used for navigating the viewports in Motive.
Rotate view: Right-click + drag
Pan view: Middle (wheel) click + drag
Zoom in/out: Mouse wheel
Select in view: Left-click
Toggle selection in view: Ctrl + left-click
Using the Hotkeys can speed up workflows. Most of the default hotkeys are listed on the Motive Hotkeys page. When needed, the hotkeys can also be customized from the application settings panel which can be accessed under the Edit tab. Various actions can be assigned with a custom hotkey using the Hotkey Editor.
The Control Deck is always docked at the bottom of Motive, and it provides both recording and navigation controls over Motive's two primary operating modes: Live mode and Edit mode.
Switching to Live Mode in Motive using the control deck.
In the Live Mode, all cameras are active and the system is processing camera data. If the mocap system is already calibrated, Motive is live-reconstructing 2D camera data into labeled and unlabeled 3D trajectories (markers) in real-time. The live tracking data can be streamed to other applications using the data streaming tools or the NatNet SDK. Also, in Live mode, the system is ready for recording and corresponding capture controls will be available in the Control Deck.
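As an illustration of consuming the streamed data mentioned above, below is a minimal sketch based on the NatNetClient.py sample shipped with recent NatNet SDK releases. The method and callback names follow that sample and have changed across SDK versions, so treat them as assumptions and verify against the sample in your SDK.

```python
# A minimal sketch of a NatNet streaming client, following the NatNetClient.py
# sample from the SDK's PythonClient folder. Names may differ per SDK version.
from NatNetClient import NatNetClient  # module from the NatNet SDK samples

def receive_rigid_body_frame(body_id, position, rotation):
    # position: (x, y, z) in meters; rotation: quaternion (qx, qy, qz, qw)
    print(f"Rigid Body {body_id}: pos={position}, rot={rotation}")

client = NatNetClient()
client.set_server_address("127.0.0.1")   # IP of the machine running Motive
client.set_client_address("127.0.0.1")   # IP of this client machine
client.set_use_multicast(True)           # must match Motive's streaming settings
client.rigid_body_listener = receive_rigid_body_frame

client.run()  # starts the data and command threads
```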
In the Edit Mode, the cameras are not active, and Motive processes a loaded Take file (pre-recorded data). The playback controls will be available in the Control Deck, and a small timeline will appear at the top of the Control Deck for scrubbing through the recorded frames. In this mode, you can review the recorded 3D data from the TAK, make post-processing edits, and/or manually assign marker labels to the recorded trajectories before exporting the tracking data. When needed, you can also switch to 2D mode to view the recorded 2D camera data and run the post-processing reconstruction pipeline to obtain a new set of 3D data.
Hotkeys: "Shift + ~" is the default hotkey for toggling between Live and Edit modes in Motive.
The Graph View pane is used for plotting live or recorded channel data in Motive. For example, 3D coordinates of the reconstructed markers, 3D positions and orientations of Rigid Body assets, force plate data, analog data from data acquisition devices, and more can be plotted on this pane. You can switch between existing layouts or create a custom layout for plotting specific channel data.
Basic navigation controls are highlighted below. For more information, read through the Graph View pane page.
Navigate Frames (Alt + Left-click + Drag)
Alt + left-click on the graph and drag the mouse left and right to navigate through the recorded frames. You can do the same with the mouse scroll as well.
Panning (Scroll-click + Drag)
Scroll-click and drag to pan the view vertically and horizontally throughout plotted graphs. Dragging the cursor left and right will pan the view along the horizontal axis for all of the graphs. When navigating vertically, scroll-click on a graph and drag up and down to pan vertically for the specific graph.
Zooming (Right-click + Drag)
Right-click and drag on a graph to free-form zoom in and out on both the vertical and horizontal axes. If Autoscale Graph is enabled, the vertical axis range will be fixed according to the max and min values of the plotted data.
Other Ways to Zoom:
Press "Shift + F" to zoom out to the entire frame range.
Zoom into a frame range by Alt + right-clicking on the graph and selecting the specific frame range to zoom into.
When a frame range is selected, press "F" to quickly zoom onto the selected range in the timeline.
Selecting Frame Range (Left-click + Drag)
The frame range selection is used when making post-processing edits on specific ranges of the recorded frames. Select a specific range by left-clicking and dragging the mouse left and right; the selected frame ranges will be highlighted in yellow. You can also select more than one frame range by shift-selecting multiple ranges.
Navigate Frames (Left-click)
Left-click and drag on the nav bar to scrub through the recorded frames. You can do the same with the mouse scroll as well.
Pan View Range
Scroll-click and drag to pan the view range.
Frame Range Zoom
Zoom into a frame range by re-sizing the scope range using the navigation bar handles. You can also easily do this by Alt + right-clicking on the graph and selecting a specific range to zoom into.
Working Range / Playback range
The working range (also called the playback range) is both the view range and the playback range of the corresponding Take in Edit mode. Recorded tracking data will be played back and shown on the graphs only within the working frame range. This range can also be used to output a specific frame range when exporting tracking data from Motive.
The working range can be set from different places:
In the navigation bar of the Graph View pane, you can drag the handles on the scrubber to set the working range.
You can also use the navigation controls on the Graph View pane to zoom in or zoom out on the frame ranges to set the working range.
Start and end frames of a working range can also be set from the Control Deck when in the Edit mode.
Selection Range
The selection range is used to apply post-processing edits only onto a specific frame range of a Take. Selected frame range will be highlighted in yellow on both Graph View pane as well as Timeline pane.
Gap indication
When playing back a recorded capture, the red coloring on the navigation bar indicates the number of occlusions from labeled markers. Brighter red means that there are more markers with labeling gaps.
This pane is used for configuring application-wide settings, which include startup configurations, display options for both the 2D and 3D viewports, settings for asset creation, and, most importantly, the live-pipeline parameters: the Solver settings and the 2D filter settings for the cameras. The Cameras tab includes the 2D filter settings, which determine which reflections get considered as marker reflections in the camera views, and the Solver settings determine which 3D markers get reconstructed in the scene from the marker reflections seen across all of the cameras. References for the available settings are documented on the Application Settings page.
If you wish to reset the application settings to their defaults, go to Reset Application Settings under the Edit tab.
Solver Settings
Under the Solver tab, you can configure the real-time solver engine. These settings, including the trajectorizer settings, are among the most important in Motive: they determine how 3D coordinates are acquired from the captured 2D camera images and how they are used for tracking Rigid Bodies and Skeletons. Understanding these settings is therefore very important for optimizing the system for the best tracking results.
Camera Settings
Under the Cameras tab, you can configure the 2D camera filter settings (circularity filter and size filter) as well as other display options for the cameras. The 2D camera filter is one of the key settings for optimizing the capture. For most applications the default settings work well, but it is still beneficial to understand some of the core settings for more efficient control over the camera system.
For more information, read through the Application Settings: Live Pipeline page and the Reconstruction and 2D Mode page.
The UI layout in Motive is customizable. All panes can be docked and undocked from the UI. Each pane can be positioned and organized by drag-and-drop using the on-screen docking indicators. Panes may float, dock, or stack. When stacked together, they form a tabbed window for quickly cycling through. Layouts in Motive can be saved and loaded, allowing a user to switch quickly between default and custom configurations suitable for different needs. Motive has preset layouts for Calibration, Creating a Skeleton, Capturing (Record), and Editing workflows. Custom layouts can be created, saved, and set as default from the Main Menu -> 'Layout' menu item. Quickly restore a particular layout from the Layout menu, the Layout Dropdown at the top right of the Main Menu, or via HotKeys.
Note: Layout configurations from Motive versions older than 2.0 cannot be loaded in the latest versions of Motive. Please re-create and update these layouts for use.
The Application Settings can be accessed under the Edit tab or by clicking the icon on the main toolbar.
When a camera, or a camera group, is selected from the Devices pane, related camera settings will be displayed in the Properties pane. From the Properties pane, you can configure the camera settings so that it is optimized for your capture application. You can enable/disable IR LEDs, change exposure length of the cameras, set the video mode, apply gain to the capture frames, and more. This page lists out properties of the cameras and what they are used for.
Advanced Settings
The Properties: Camera pane contains advanced settings that are hidden by default. To access them, open the menu in the top-right corner of the pane and click Show Advanced; all of the settings, including the advanced ones, will then be listed in the pane.
The list of advanced settings can also be customized to show only the settings needed for your capture application. To do so, go to the pane menu, click Edit Advanced, and check or uncheck settings to control which ones are listed in the pane by default. Once the desired settings are selected, click Done Editing to apply the customized configuration.
Enables/disables the selected cameras. When cameras are disabled, they do not record any data or contribute to the reconstruction of 3D data.
Shows the frame rate of the camera. The camera frame rate can only be changed from the Devices pane.
This setting determines whether or not selected cameras contribute to the real-time reconstruction.
Shows the rate multiplier or divider applied to the master frame rate. The master frame rate depends on the sync configuration.
Sets the amount of time that the camera exposes per frame. The minimum and maximum values depend on both the type of camera and the frame rate. Higher exposure allows more light in, creating a brighter image that can increase visibility for small and dim markers. However, setting the exposure too high can introduce false markers, larger marker blooms, and marker blurring, all of which can negatively impact marker data quality. The exposure value is measured in scanlines for tracking bars and Flex3 series cameras, and in microseconds for Flex13, S250e, Slim13E, and Prime series cameras.
Defines the minimum brightness for a pixel to be seen by a camera, with all pixels below the threshold being ignored. Increasing the threshold can help filter interference by non-markers (e.g. reflections and external light sources), while lowering the threshold can allow dimmer markers to be seen by the system (e.g. smaller markers at longer distances from the camera).
[Advanced] When calibrating multi-room spaces, you can partition the selected cameras to allow Continuous Calibration to collect samples from each room and stay calibrated even when there is no camera overlap between spaces. This makes it possible to have several capture volumes tied to a single system while maintaining continuously calibrated cameras for each space.
This setting enables or disables the IR LED ring on the selected cameras. For tracking passive retro-reflective markers, this setting must be set to true so that the IR LED rings illuminate the markers. If the IR illumination is too bright for the capture, you can decrease the camera exposure setting to reduce the amount of light received by the imager, dimming the overall captured frames.
Sets the video type of the selected camera.
Sets the camera to view either the visible or IR spectrum on cameras equipped with a Filter Switcher. When enabled, the camera captures the IR spectrum; when disabled, the camera captures the visible spectrum. Infrared Spectrum should be selected when the camera is being used for marker tracking applications. Visible Spectrum can optionally be selected for full frame video applications, where external, visible spectrum lighting is used to illuminate the environment instead of the camera's IR LEDs. Common applications include reference video and external calibration methods that use images projected in the visible spectrum.
Sets the imager gain level for the selected cameras. Gain settings can be adjusted to amplify or diminish the brightness of the image. This setting can be beneficial when tracking at long ranges. However, note that increasing the gain level will also increase the noise in the image data and may introduce false reconstructions. Thus, before deciding to change the gain level, adjust the camera settings first to optimize the image clarity.
[Advanced] This property indicates whether the selected camera has been calibrated or not. This is only an indication of whether the camera has been processed through the calibration wanding; it does not validate the quality of the camera calibration.
Basic information about the selected camera gets listed in the Details section
Displays the camera number assigned by Motive.
Displays the model of a selected camera.
Displays the serial number of the selected camera.
Displays focal length of the lens on the selected camera.
When this is enabled, the estimated field of view (FOV) of the selected camera will be shown in the perspective viewport.
Shows or hides frame delivery information from the selected camera. The frame delivery information is used to diagnose how quickly each camera is delivering its frame packets. When enabled, this information is shown in the camera views.
Show or hide the guide reticle when using the Aim Assist button for aiming the cameras.
Prime color cameras also have the following properties that can be configured:
Default: 1920, 1080
This property sets the resolution of the images captured by the selected cameras. Since the amount of data increases with higher resolution, the maximum allowable frame rate varies depending on which resolution is selected. Below are the maximum allowed frame rates for each resolution setting.
960 x 540 (540p): up to 500 FPS
1280 x 720 (720p): up to 360 FPS
1920 x 1080 (1080p): up to 250 FPS
Default: Constant Bit Rate.
This property determines how much the captured images will be compressed. The Constant Bit-Rate mode is used by default and is recommended because it makes it easier to control the data transfer rate and efficiently utilize the available network bandwidth.
Constant Bit-Rate
In the Constant Bit-Rate mode, Prime Color cameras vary the degree of image compression to match the data transmission rate given under the Bit Rate settings. At a higher bit-rate setting, the captured image will be compressed less. At a lower bit-rate setting, the captured image will be compressed more to meet the given data transfer rate, but compression artifacts may be introduced if it is set too low.
Variable Bit-Rate
The Variable Bit-Rate setting is also available; it keeps the amount of compression constant and allows the data transfer rate to vary. This mode can be beneficial when capturing objects with detailed textures because the amount of compression stays the same on all frames. However, it may introduce dropped frames when the camera compresses highly detailed images, because the data transfer rate increases and may overflow the available network bandwidth. For this reason, we recommend the Constant Bit-Rate setting for most applications.
Default: 50
Available only while using Constant Bit-rate Mode
The bit-rate setting determines the transmission rate output from the selected color camera. The value is given as a percentage of the maximum data transmission speed; since each color camera can output up to ~100 MB/s, the configured value indirectly represents the transmission rate in megabytes per second (MB/s). At a bit-rate setting of 100, the camera captures the best quality image, but it could overload the network if there is not enough bandwidth to handle the transmitted data.
Since the bit-rate controls the amount of data output from each color camera, this is one of the most important settings for properly configuring the system. If your system is experiencing 2D frame drops, one of the system requirements is not being met: network bandwidth, CPU processing, or RAM/disk throughput. In such cases, you can decrease the bit-rate setting to reduce the amount of data output from the color cameras.
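As a rough back-of-the-envelope illustration of this sizing, the sketch below estimates aggregate camera output from the ~100 MB/s per-camera maximum stated above. The link-capacity figure is a nominal assumption; real-world throughput is lower.

```python
# Rough estimate of aggregate color-camera data load, using the ~100 MB/s
# per-camera maximum stated above. The camera count is an example value.
MAX_MBPS_PER_CAMERA = 100  # approx. MB/s at a bit-rate setting of 100

def aggregate_load(num_cameras, bit_rate_percent):
    return num_cameras * MAX_MBPS_PER_CAMERA * bit_rate_percent / 100.0

load = aggregate_load(num_cameras=4, bit_rate_percent=50)
print(f"Estimated camera output: {load:.0f} MB/s")  # 200 MB/s in this example

# A 10 Gbps link carries a nominal ~1250 MB/s before protocol overhead, so
# this example configuration leaves comfortable headroom.
```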
Image Quality
The image quality increases at a higher bit-rate setting because a larger amount of data is recorded, but this results in larger file sizes and possible frame drops due to data bandwidth bottlenecks. The desired trade-off differs depending on the capture application and what it is used for. The graph below illustrates how image quality varies depending on the camera frame rate and bit-rate settings.
Tip: Monitoring data output from each camera
Data output from the entire camera system can be monitored through the Status Panel. Output from individual cameras can be monitored from the 2D Camera Preview pane when the Camera Info is enabled under the visual aids () option.
Default: 24
Gamma correction is a non-linear amplification of the output image. The gamma setting adjusts the brightness of dark pixels, mid-tone pixels, and bright pixels differently, affecting both the brightness and contrast of the image. Depending on the capture environment, especially with a dark background, you may need to adjust the gamma setting to get the best quality images.
Rigid body properties determine how the corresponding Rigid Body asset is tracked and displayed in the viewport.
To view related properties, select a Rigid Body asset in the Assets pane or in the 3D viewport, and the corresponding properties will be listed under the Properties pane. These properties can be modified both in Live and Edit mode. Default creation properties are listed under the Application Settings.
Advanced Settings
The Properties: Rigid Body pane contains advanced settings that are hidden by default. To access them, open the menu in the top-right corner of the pane and click Show Advanced; all of the settings, including the advanced ones, will then be listed in the pane.
The list of advanced settings can also be customized to show only the settings needed for your capture application. To do so, go to the pane menu, click Edit Advanced, and check or uncheck settings to control which ones are listed in the pane by default. Once the desired settings are selected, click Done Editing to apply the customized configuration.
Allows a custom name to be assigned to the Rigid Body. The default is "Rigid Body X", where X is the Rigid Body ID.
Enables/disables tracking of the selected Rigid Body. A disabled Rigid Body will not be tracked, and its data will not be included in the exported or streamed tracking data.
User definable ID for the selected Rigid Body. When working with capture data in the external pipeline, this value can be used to address specific Rigid Bodies in the scene.
The minimum number of markers that must be tracked and labeled in order for a Rigid Body asset, or each Skeleton bone, to be booted or first tracked.
The minimum number of markers that must be tracked and labeled in order for a Rigid Body asset, or each Skeleton bone, to continue to be tracked after the initial boot.
[Advanced] The order of the Euler axes used for calculating the orientation of the Rigid Body and Skeleton bones. Motive computes orientations as quaternions and converts them into an Euler representation as needed. For exporting specific Euler angles, it is recommended to configure this from the Exporter settings; for streaming, convert the quaternion into Euler angles on the client side.
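For reference, a minimal client-side conversion sketch using SciPy is shown below. The "XYZ" axis-order string is an example; choose the order your downstream pipeline expects.

```python
# A minimal sketch of converting a streamed quaternion to Euler angles on
# the client side using SciPy. The quaternion values are an example.
from scipy.spatial.transform import Rotation

qx, qy, qz, qw = 0.0, 0.7071068, 0.0, 0.7071068  # example: 90 degrees about Y

rot = Rotation.from_quat([qx, qy, qz, qw])  # SciPy expects (x, y, z, w) order
x_deg, y_deg, z_deg = rot.as_euler("XYZ", degrees=True)
print(x_deg, y_deg, z_deg)  # -> 0.0 90.0 0.0
```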
Selects whether or not to display the Rigid Body name in the 3D Perspective View. If selected, a small label in the same color as the Rigid Body will appear over the centroid in the 3D Perspective View.
Shows the corresponding Rigid Body in the 3D viewport when it is tracked by the camera system.
Color of the selected Rigid Body in the 3D Perspective View. Clicking on the box will bring up the color picker for selecting the color.
For Rigid Bodies, this property shows, or hides, visuals of the Rigid Body pivot point.
[Advanced] Enables the display of a Rigid Body's local coordinate axes. This option can be useful in visualizing the orientation of the Rigid Body, and for setting orientation offsets.
Shows a history of the Rigid Body’s position. When enabled, you can set the history length and the tracking history will be drawn in the Perspective view.
Shows the Marker Constraints as transparent spheres on the Rigid Body. Marker Constraints are the expected marker locations according to the Rigid Body solve.
Draws lines between labeled Rigid Body or Skeleton markers and corresponding expected marker locations. This helps to visualize the offset distance between actual marker locations and the Marker Constraints.
[Advanced] When enabled, all markers that are part of the Rigid Body definition will be dimmed, but still visible, when not present in the point cloud.
When a valid geometric model is loaded in the Attached Geometry section, the model will be displayed instead of the Rigid Body when this entry is set to true.
The Attached Geometry setting is visible when the Replace Geometry setting is enabled. Here, you can load an OBJ file to replace the Rigid Body visual; the scale, position, and orientation of the attached geometry can be configured in the section below. When an OBJ file is loaded, the properties configured in the corresponding MTL file alongside the OBJ file are loaded as well.
Attached Geometry Settings
When Attached Geometry is enabled, you can attach a 3D model to a Rigid Body, and the following settings will also be available.
Pivot Scale: Adjusts the size of the Rigid Body pivot point.
Scale: Rescales the size of the attached object.
Yaw (Y): Rotates the attached object about the Y-axis of the Rigid Body coordinate axes.
Pitch (X): Rotates the attached object about the X-axis of the Rigid Body coordinate axes.
Roll (Z): Rotates the attached object about the Z-axis of the Rigid Body coordinate axes.
X: Translates the attached object along the X-axis of the Rigid Body coordinate system.
Y: Translates the attached object along the Y-axis of the Rigid Body coordinate system.
Z: Translates the attached object along the Z-axis of the Rigid Body coordinate system.
Opacity: Sets the opacity of the attached object. An OBJ file typically comes with a corresponding MTL file that defines its properties, including the transparency of the object. The Opacity value under the Rigid Body properties applies a factor between 0 and 1 to rescale the loaded transparency. In other words, you can set the transparency in the MTL file and rescale it using the Opacity property in Motive; for example, an MTL transparency of 0.8 combined with an Opacity of 0.5 renders at 0.4.
If you are exporting an OBJ file from Maya, you will need to make sure the Ambient Color setting is set to white upon export. If this color is set to black, it will result in removing textures when a Rigid Body is deselected.
The IMU feature is not fully supported in Motive 3.x. Please use Motive 2.3 when working with IMU active components.
The Uplink ID assigned to the Tag or Puck using the Active Batch Programmer. This ID must match the Uplink ID assigned to the Active Tag or Puck that was used to create the Rigid Body.
Radio frequency communication channel configured on the Active Tag, or Puck, that was used to define the corresponding Rigid Body. This must match the RF channel configured on the active component; otherwise, IMU data will not be received.
Applies double exponential smoothing to the translation and rotation of the Rigid Body. Increasing this setting may help smooth out noise in the Rigid Body tracking, but excessive smoothing can introduce latency. The default is 0 (disabled).
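Motive's internal filter is not documented in detail, so the sketch below only illustrates the general double exponential (Holt) smoothing technique and its smoothing-versus-latency trade-off on a single channel.

```python
# A sketch of double exponential (Holt) smoothing on one position channel.
# This is an illustration of the technique only, not Motive's implementation;
# Motive exposes a single smoothing-strength value rather than alpha/beta.
def double_exp_smooth(samples, alpha=0.5, beta=0.5):
    level, trend = samples[0], 0.0
    smoothed = [level]
    for x in samples[1:]:
        prev_level = level
        level = alpha * x + (1 - alpha) * (level + trend)         # smoothed value
        trend = beta * (level - prev_level) + (1 - beta) * trend  # smoothed slope
        smoothed.append(level)
    return smoothed

noisy = [0.00, 0.12, 0.19, 0.33, 0.41, 0.48, 0.61, 0.69]
print(double_exp_smooth(noisy))  # output lags the input slightly: smoothing adds latency
```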
Compensates for system latency when tracking the corresponding Rigid Body by predicting its movement into the future. Please note that predicting further into the future may impact tracking stability.
[Advanced] When needed, you can damp down translational and/or rotational tracking of a Rigid Body or a Skeleton bone on selected axes.
The Graph View pane is used to visualize the tracking data in Motive. This pane can be accessed from the command bar (View tab > Graph) or simply by clicking on the icon. This page provides instructions and tips on how to efficiently utilize the Graph View pane in Motive.
Using the Graph View pane, you can visualize and monitor multiple data channels, including 3D positions of reconstructed markers, 6 degrees of freedom (6 DoF) data of trackable assets, and signals from integrated external devices (e.g. force plates or NI-DAQ devices). The Graph View pane offers a variety of graph layouts for effective data visualization. In addition to the basic layouts (channel, combined, gapped), custom layouts can be created for monitoring specific data channels only. Up to 9 graphs can be plotted in each layout, and up to two Graph View panes can be opened simultaneously in Motive.
Graphs can be plotted in both Live and Edit mode.
In Live Mode, the following data can be plotted in real-time:
Rigid body 6 DoF data (Position and Orientation)
Force Plate Data (Force and Moment)
Analog Data
In Edit Mode, the graphs can be used to review and post-process the captured data:
3D Positions of reconstructed markers
Rigid body 6 DoF data (Position and Orientation)
Force Plate Data (Force and Moment)
Analog Data
Graph Editor
This opens up the sidebar for customizing a selected graph within a layout.
Autoscale Graph
Toggle to autoscale X/Y/Z graphs.
Zoom Fit
(selected range)
Zooms into selected frame region and centers the timeline accordingly.
Lock Cursor Centered
Locks the timeline scrubber at the center of the view range.
Delete Selected Keys
Deletes the selected frame region.
Move Selected Keys
Translates trajectories in selected frame region. Select a range and drag up and down on a trajectory.
Draw Keys
Manually draw a trajectory by clicking and dragging on a selected trajectory in the editor.
Merge Keys Up
Merge Keys Down
Lock Selection
Locks the current selection (marker, Rigid Body, Skeleton, force plate, or NI-DAQ channel) onto all graphs in the layout. This is used to temporarily hold the selection. Locked selections can later be fixed by taking a snapshot of the layout; this is elaborated further in a later section.
Creates a new graph layout.
Deletes the current graph layout.
Saves the changes to the graph layout XML file.
Takes an XML snapshot of the current graph layout. Once a layout has been particularized, both the layout configuration and the item selection are fixed, and the layout can be exported and imported into different sessions.
Opens the layout XML file of the current graph layout for editing.
Opens the file location of where the XML files for the graph layouts are stored.
Alt + left-click on the graph and drag the mouse left and right to navigate through the recorded frames. You can do the same with the mouse scroll as well.
Scroll-click and drag to pan the view vertically and horizontally throughout plotted graphs. Dragging the cursor left and right will pan the view along the horizontal axis for all of the graphs. When navigating vertically, scroll-click on a graph and drag up and down to pan vertically for the specific graph.
Right-click and drag on a graph to free-form zoom in and out on both vertical and horizontal axis. If the Autoscale Graph is enabled, the vertical axis range will be fixed according to the max and min value of the plotted data.
Other Ways to Zoom:
Press "Shift + F" to zoom out to the entire frame range.
Zoom into a frame range by Alt + right-clicking on the graph and selecting the specific frame range to zoom into.
When a frame range is selected, press "F" to quickly zoom onto the selected range in the timeline.
The frame range selection is used when making post-processing edits on specific ranges of the recorded frames. Select a specific range by left-clicking and dragging the mouse left and right; the selected frame ranges will be highlighted in yellow. You can also select more than one frame range by shift-selecting multiple ranges.
Left-click and drag on the nav bar to scrub through the recorded frames. You can do the same with the mouse scroll as well.
Scroll-click and drag to pan the view range.
Zoom into a frame range by re-sizing the scope range using the navigation bar handles. You can also easily do this by Alt + right-clicking on the graph and selecting a specific range to zoom into.
The working range (also called the playback range) is both the view range and the playback range of a corresponding Take in Edit mode. Only within the working frame range will recorded tracking data be played back and shown on the graphs. This range can also be used to output a specific frame range when exporting tracking data from Motive.
The working range can be set from different places:
In the navigation bar of the Graph View pane, you can drag the handles on the scrubber to set the working range.
You can also use the navigation controls on the Graph View pane to zoom in or zoom out on the frame ranges to set the working range.
Start and end frames of a working range can also be set from the Control Deck when in the Edit mode.
The selection range is used to apply post-processing edits only onto a specific frame range of a Take. Selected frame range will be highlighted in yellow on both the Graph View pane as well as the Timeline pane.
Gap indication
When playing back a recorded capture, the red colors on the navigation bar indicate the number of occlusions from labeled markers. Brighter red means that there are more markers with labeling gaps.
Left-click and drag on the graph to select a specific frame range. Frame range selection can be utilized for the following workflows:
Zooming: Zoom quickly into the selected range by clicking on the button or by using the F hotkey.
Tracking Data Export: Exporting tracking data for selected frame ranges.
Reconstruction: Performing the post-processing reconstruction (Reconstructing / Reconstruct and Auto-labeling) pipeline on selected frame ranges.
Labeling: Assigning marker labels, modifying marker labels, or running the auto-label pipeline on selected ranges only.
Post-processing data editing: Applying the editing tools on selected frame ranges only. Read more: Data Editing
Data Deleting: Deleting 3D data or marker labels on selected ranges.
The layouts feature in the Graph View pane allows users to organize and format graphs to their preference. The graph layout is selected from the drop-down menu located at the top-right corner of the Graph View pane.
In addition to the default graph layouts (channels view, combined view, and tracks view), which have been migrated from previous versions of Motive, custom layouts can also be created. With custom layouts, users can specify which data channels to plot on each graph, and up to 9 graphs can be configured in each layout. Furthermore, selections can be locked to specific labeled markers or assets.
Layouts under the System Layouts category are the same graphs that existed in the old timeline editor.
The Channel View provides X/Y/Z curves for each selected marker, providing verbose motion data that highlights gaps, spikes, or other types of noise in the data.
The Combined View provides X/Y/Z curves for all selected markers on the same plot. This mode is useful for monitoring position changes without having to translate or rescale the y-axis of the graph.
The Tracks View is a simplified view that can reveal gaps, marker swaps, and other basic labeling issues that can be quickly remedied by merging multiple marker trajectories together. You can select a specific group of markers from the drop down menu. When two markers are selected, labels can be merged by using the Merge Keys Up and Merge Keys Down buttons.
In the new Graph View pane, the graph layout can be customized to monitor data from any of the channels involved in a capture. Create a new layout from the pane menu > Create New Layout option, or right-click on the pane and click Create New Layout.
Graph layout customization is explained further in a later section: Customizing Layout.
1. Create a new layout by clicking Create Graph Layout in the pane menu located at the top-right corner.
2. Right-click on the graph, go to Grid Layout, and choose the number of rows and columns that you wish to put in the grid (max 9 x 9).
3. Expand the Graph Editor by clicking on the icon on the toolbar.
4. Click on a graph in the grid. The graph will be highlighted in yellow. Within the grid, only the selected graph will be edited when making changes using the Graph Editor.
5. Pick the data channels that you wish to plot by checking the desired channels under the Data tab while the graph is selected. Only the checked channels will be plotted on the selected graph. Here, you can also specify which color to use when plotting each data channel.
6. Under the Visual tab, format the style of the graph. You can configure the graph axes, assign a name to the graph, display values, etc. Most importantly, configure the View Style to match the desired graph format.
When plotting live tracking data in Live mode, set the View Style to Live. The frame range of Live mode graphs can be adjusted by changing the scope duration under the application settings.
7. Repeat steps 5 and 6 to configure each of the graphs in the layout.
Select an asset (marker, Rigid Body, Skeleton, force plate, or NI-DAQ channel) that you wish to monitor.
Lock the selection for graphs that need to be linked to it. Individual graphs can be locked from the context menu (right-click on the graph > Lock Selection), or all graphs can be locked by clicking the icon on the toolbar.
Once all related graphs are locked, move on to next selection and lock the corresponding graph.
When you have the layout configured with the locked selections, you can temporarily save the configuration, along with the implicit selections, to the layout. Until the layout is particularized onto explicit selections, you will need to select the related items in Motive to plot the respective graphs.
The last step is to make the selection explicit by particularizing the layout. Do this by clicking the Particularize option under the pane menu once the layout is configured and the desired selections are locked. This fixes the explicit selection in the layout XML file, and the layout will always look for specific items with the same names in the Take. Particularized graphs are indicated by an icon at the top-right corner of the graph.
It is important to particularize a customized layout once all of the graphs are configured. This action saves and explicitly fixes the selections that the graphs are locked onto. Once a layout has been particularized, you can re-open the same layout in different sessions and plot the data channels from the same subject without locking the selections again. Specifically, the particularized layout will look for items (labeled markers, Rigid Bodies, Skeletons, force plates, or analog channels) with the same names that the layout was particularized to.
The Graph Editor can be expanded by clicking on the icon in the toolbar. When this sidebar is expanded, you can select individual graphs, but other navigation controls will be disabled. Using the Graph Editor, you can select a graph, choose which data channels to plot, and format the overall look to suit your needs.
Only enabled, or checked, data channels will be plotted on the selected graph using the specified color. Once channels are enabled, an asset (marker, Rigid Body, Skeleton, force plate, or DAQ channel) must be selected and locked.
Plot 3D position (X/Y/Z) data of selected, or locked, marker(s) onto the selected graph.
Plot pivot point position (X/Y/Z), rotation (pitch/yaw/roll), or mean error values of selected, or locked, Rigid Body asset(s) onto the selected graph.
Plot analog data of selected analog channel(s) from a data acquisition (NI-DAQ) device onto the selected graph.
Plot force and moment (X/Y/Z) data of the selected force plate(s). The plotted graph respects the coordinate system of the force platforms (Z-up).
Using the black color (0,0,0) for the plots will set the graph color to the color of the Rigid Body asset shown in the 3D viewport, which is set under the Rigid Body properties.
Labels the selected graph.
Configures the style of the selected graph:
Channel: Plots selected channels onto the graph.
Combined: Plots X/Y/Z curves for all selected markers fixed on the same plot.
Gap: The Tracks View style allows you to easily monitor the occluded gaps on selected markers.
Live: The Live mode is used for plotting the live data.
Enables/disables range handles that are located at the bottom of the frame selection.
Sets the height of the selected row in the layout. The height is determined by a ratio: (row stretch value for the selected row) / (sum of row stretch values from all rows) × (size of the pane). For example, rows with stretch values 1 and 2 receive one third and two thirds of the pane height, respectively.
Sets the width of the selected column in the layout. The width is determined by the same ratio: (column stretch value for the selected column) / (sum of column stretch values from all columns) × (size of the pane).
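A small sketch of this ratio in code, using arbitrary example values:

```python
# How stretch values translate into row heights, following the ratio above.
# The stretch values and pane size are arbitrary example inputs.
def row_heights(stretch_values, pane_size_px):
    total = sum(stretch_values)
    return [v / total * pane_size_px for v in stretch_values]

print(row_heights([1, 2, 1], 800))  # -> [200.0, 400.0, 200.0]
```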
Displays the current frame values for each data set.
Displays the name of each plotted data set.
Plots data from the primary selection only. The primary selection is the last item selected from Motive.
Shows/hides x grid-lines.
Shows/hides y grid-lines.
Sets the size of the major grid lines, or tick marks, on the y-axis values.
Sets the size of the minor grid lines, or tick marks, on the y-axis values.
Sets the minimum value for the y-axis on the graph.
Sets the maximum value for the y-axis on the graph.
Merges two trajectories together; this is especially useful in the Tracks View. Select two trajectories and click this button to merge the bottom trajectory into the top trajectory.
Merges two trajectories together; this is especially useful in the Tracks View. Select two trajectories and click this button to merge the top trajectory into the bottom trajectory.