This page includes all of the Motive tutorial videos for visual learners.
Updated videos coming soon!
Motive 3.0.2 Update:
Following the Motive 3.0.2 release, an internet connection is no longer required for initial use of Motive. If you are currently using Motive 3.0.1 or older, please install this new release from our Software webpage. Please note that an internet connection is still required to download Motive.exe from the OptiTrack website.
Important License Update:
Motive 3 uses a new licensing system. Please check the OptiTrack website for details on Motive licenses.
Security Key (Motive 3.x): Starting with version 3.0, a USB Security Key is required to use Motive. USB Hardware Keys that were used to activate older versions of Motive will no longer work with 3.0 and will need to be replaced with the USB Security Key. For any questions, please contact us.
Hardware Key (Motive 2.x or below): Motive 2.x versions still follow the same system and require a USB Hardware Key.
USB Cameras
USB cameras, including the Flex series, tracking bars, and Slim3U cameras, are not currently supported in 3.x versions. For USB camera systems, please use Motive 2.x versions. Go to the Motive 2.3 documentation.
For More Information:
Visit our website for more information on the new versions:
What's New: https://www.optitrack.com/software/motive/
Changelog and Download link: https://www.optitrack.com/support/downloads/
Below is a quick start guide for most Prime Color and Prime Color FS setups. These setup steps and settings optimize Prime Color camera systems and are strongly recommended for best performance. Please see our full Prime Color Camera pages for more in-depth information on each topic.
1 Gbps system (1-2 Prime Color cameras):
Windows 10 or 11 Professional (64-bit)
Designated 1 Gbps NIC with drivers
CPU: Intel i9 or better, 3.5 GHz+
Network switch with 1 Gbps uplink port
RAM: 16 GB+ of memory
GPU: GTX 1050 or better with the latest drivers; must support OpenGL 4.0 or higher
M.2 SSD
10 Gbps system (3 or more Prime Color cameras):
Windows 10 or 11 Professional (64-bit), or Windows IoT (contact Support)
Designated 10 Gbps+ NIC with drivers
CPU: Intel i9 or better, 3.5 GHz+
Network switch with 10 Gbps+ uplink port
RAM: 32 GB+ of memory
GPU: RTX 2070 or better with the latest drivers; must support OpenGL 4.0 or higher
M.2 SSD
If you experience latency or camera drops, you may need to increase the specifications of certain components, especially if your setup includes a larger Prime Color camera count. Please reach out to our Support team if you are still experiencing these issues after upgrading to the specifications above and following the setup below.
Each Prime Color camera must be uplinked and powered through a standard PoE connection that can provide at least 15.4 watts to each port simultaneously.
Please note that if your aggregation switch is PoE, you can plug your Prime Color Cameras directly into the aggregation switch. PoE injectors are optional and will only be required if your aggregation switch is not PoE.
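As a quick sanity check on switch sizing, the per-port figure above can be multiplied out against the switch's total PoE power budget. The sketch below is illustrative only: the 15.4 W value is the per-port requirement quoted above, while the camera count and switch budget are hypothetical examples.

```python
# Illustrative PoE budget check (15.4 W per camera, per the requirement above).
POE_WATTS_PER_CAMERA = 15.4

def poe_budget_ok(camera_count, switch_poe_budget_watts):
    """Return (ok, required_watts) for a hypothetical PoE switch budget."""
    required = camera_count * POE_WATTS_PER_CAMERA
    return required <= switch_poe_budget_watts, required

# Example: six cameras need ~92.4 W, so a switch with a 90 W PoE budget falls short.
print(poe_budget_ok(6, 90))
```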
Prime Color cameras connect to the camera system just like other Prime series camera models. Simply plug the camera into a PoE switch that has enough available bandwidth, and it will be powered and synchronized along with the other tracking cameras. When you have two or more color cameras, distribute them evenly across different PoE switches so that the data load is balanced.
For 1-2 Prime Color cameras, it is recommended to use a 1 Gbps network switch with a 1 Gbps uplink port and a 1 Gbps (or faster) NIC. For 3+ Prime Color cameras, it is required to use network switches with a 10 Gbps uplink port in conjunction with a designated 10 Gbps NIC and the appropriate drivers.
NIC drivers may need to be installed from a disc or downloaded from the manufacturer's support website. If you're unsure of where to find these drivers or how to install them, please reach out to our Support team.
When using multiple Prime Color cameras, we recommend connecting the color cameras directly into the 10-gigabit aggregation (uplink) switch, because this setup best prevents bandwidth bottlenecks. A PoE injector will be required if the uplink switch does not provide PoE. This allows the data to travel directly onto the uplink switch and to the host computer through the 10-gigabit network interface. It also separates the color cameras from the tracking cameras.
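To see why uplink bandwidth becomes the bottleneck as color camera counts grow, a rough uncompressed figure is enough. This is only a back-of-envelope sketch: the 30 fps frame rate and 24-bit color depth are assumed values for illustration, and actual Prime Color streams are compressed, so real bitrates will differ.

```python
# Back-of-envelope uncompressed video bandwidth for one 1920x1080 color camera.
# Assumed values for illustration only: 24-bit RGB pixels, 30 frames per second.
width, height, bytes_per_pixel, fps = 1920, 1080, 3, 30

bits_per_second = width * height * bytes_per_pixel * 8 * fps
print(f"{bits_per_second / 1e9:.2f} Gb/s per camera, uncompressed")  # ~1.49 Gb/s
```

Even a single uncompressed stream would saturate a 1 Gbps link, which is why compression, a dedicated NIC, and a 10 Gbps uplink matter once several color cameras share one path to the host.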
You'll want to remove as much bloatware as possible from your PC in order to optimize your system and make sure minimal unnecessary background processes are running. Background processes can take up valuable CPU resources from Motive and cause frame drops while running your camera system.
There are many external resources on removing unused apps and halting unnecessary background processes, so they are not covered within the scope of this page.
As a general rule for all OptiTrack camera systems, you'll want to disable all Windows firewalls and either disable or remove any antivirus software. If firewalls and antivirus software are enabled, they can cause frame drops while running your camera system.
In order for Motive to run above other processes, you'll need to change the Priority of Motive.exe to High.
Right-click the Motive shortcut on your Desktop and select Properties.
In the Target: field, enter the path below. This launches Motive at High priority, and the setting persists when Motive is closed and reopened.
C:\Windows\System32\cmd.exe /C start "" /high "C:\Program Files\OptiTrack\Motive\Motive.exe"
Please refrain from setting the priority to Realtime. If Realtime is selected, this can cause loss of input control (mouse, keyboard, etc.) since Windows can prioritize Motive above input processes.
If your CPU has a lower core count, you may need to prevent Motive from running on a couple of cores. This helps stabilize the overall system and frees up cores for other required Windows processes.
From the Task Manager, navigate to the Details tab and right-click on Motive.exe.
Select Set affinity.
From this window, uncheck the cores you do not want Motive.exe to run on.
Click OK.
Please note that you should only ever disable two cores or fewer to ensure Motive still runs smoothly.
We recommend that you start with only one core and work your way up to two if you're still experiencing frame drop issues with your camera system.
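If you prefer to script these two tweaks rather than set them by hand, the sketch below shows one possible approach using the third-party psutil package. This is not part of Motive; the core list is an example, and the script simply automates the same High priority and affinity changes described above (run it after Motive is open, with administrator rights if required).

```python
# Minimal sketch using the third-party psutil package (pip install psutil).
# Finds a running Motive.exe, raises it to High priority, and restricts it
# to an example set of CPU cores. Adjust the core list for your own machine.
import psutil

def tune_motive(allowed_cores=(0, 1, 2, 3, 4, 5)):
    for proc in psutil.process_iter(["name"]):
        if (proc.info["name"] or "").lower() == "motive.exe":
            proc.nice(psutil.HIGH_PRIORITY_CLASS)   # Windows-only priority class
            proc.cpu_affinity(list(allowed_cores))  # keep Motive off the remaining cores
            return True
    return False

if __name__ == "__main__":
    print("Motive found and tuned:", tune_motive())
```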
Windows IoT is a stripped-down version of the Windows OS. This can offer many benefits in terms of running a smooth system with very few of the 'extras' that come standard with more commercial versions of Windows. Windows IoT can further aid Prime Color camera system performance.
If you're still experiencing issues with dropped frames even after altering the settings above, please reach out to our Support team for more information regarding Windows IoT.
In most cases, your switch settings will not need to be altered. However, if your switch has built-in Storm Control, you'll want to disable this feature.
Your Network Interface Card has a few settings that you'll need to change in order to optimize your system and reduce issues when capturing Prime Color Camera video.
To navigate to the camera network's NIC:
Open Windows Settings
Select Ethernet from the navigation sidebar
Under Related settings select Change adapter options
From the Network Connections pop-up window, right-click on your NIC and select Properties
Select the Configure... button and navigate to the Advanced tab
For the Speed and Duplex property, select the highest throughput of your NIC. If you have a 10 Gbps NIC, make sure that 10 Gbps Full Duplex is selected. This property allows the NIC to operate at its full capacity. If this setting is not set to Full, Windows has a tendency to throttle the NIC throughput, causing a 10 Gbps NIC to send data at only 2 Gbps.
Interrupt Moderation allows the NIC to moderate interrupts. When a significant amount of data is being uplinked to Motive, this can cause more interrupts to occur, hindering system performance. You'll want to Disable this property.
After the above properties have been applied, the NIC will go through a reboot process. This process is automatic; however, it will make your camera network appear to be down for a few minutes. This is normal, and once the NIC has rebooted, the network should work as expected.
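If you want to confirm from the host side that the adapter actually negotiated the expected link speed and duplex after these changes, the short sketch below reads the adapter statistics with the third-party psutil package. It only reports what Windows negotiated; it does not change any NIC properties.

```python
# Minimal sketch: list each network adapter's negotiated speed and duplex
# using the third-party psutil package, so a 10 Gbps NIC can be verified
# to be running at 10000 Mbps full duplex rather than a throttled rate.
import psutil

DUPLEX_NAMES = {
    psutil.NIC_DUPLEX_FULL: "full",
    psutil.NIC_DUPLEX_HALF: "half",
    psutil.NIC_DUPLEX_UNKNOWN: "unknown",
}

for name, stats in psutil.net_if_stats().items():
    print(f"{name}: up={stats.isup}, speed={stats.speed} Mbps, "
          f"duplex={DUPLEX_NAMES[stats.duplex]}")
```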
Although not recommended, you may use a laptop PC to run a Prime Color camera system. When using a laptop, you'll need to use an external network adapter. The above settings will typically not apply to these types of adapters, so no properties will need to be changed.
It is important to use a Thunderbolt port adapter with corresponding Thunderbolt ports on your laptop, as opposed to a standard USB-C adapter and port.
By default, this value is set to 50; however, depending on the specifications of your particular system, it may need to be lowered, or it can be raised as long as your system can handle the increased data output.
By default, this value is set to the full resolution of 1920 x 1080. Typically you will not need to alter this setting.
It is recommended to close the Cameras View during recording. This further stabilizes Motive, minimizing lag and frame drops.
This page provides instructions on how to set up and use the OptiTrack active marker solution.
Additional Note
This guide is for OptiTrack active markers only. Third-party IR LEDs will not work with instructions provided on this page.
This solution is supported for Ethernet camera systems (Slim 13E or Prime series cameras) only. USB camera systems are not supported.
Motive version 2.0 or above is required.
This guide covers active component firmware versions 1.0 and above; this includes all active components that were shipped after September 2017.
For active components that were shipped prior to September 2017, please see the compatibility notes page for more information about the firmware compatibility.
The OptiTrack Active Tracking solution allows synchronized tracking of active LED markers using an OptiTrack camera system. It consists of the Base Station and, depending on the user's needs, Active Tags that can be integrated into any object and/or the Active Puck, which can act as its own single Rigid Body.
Connected to the camera system, the Base Station emits RF signals to the active markers, allowing precise synchronization between camera exposure and the illumination of the LEDs. Each active marker is uniquely labeled in Motive, allowing more stable Rigid Body tracking: active markers are never mislabeled, and unique marker placements are no longer required to distinguish multiple Rigid Bodies.
Sends out radio frequency signals for synchronizing the active markers.
Powered by PoE, connected via Ethernet cable.
Must be connected to one of the switches in the camera network.
Connects to a USB power source and illuminates the active LEDs.
Receives RF signals from the Base Station and correspondingly synchronizes illumination of the connected active LED markers.
Emits 850 nm IR light.
4 active LEDs in each bundle and up to two bundles can be connected to each Tag.
(8 active LEDs per Tag: 4 LEDs per set x 2 sets)
Size: 5 mm (T1 ¾) Plastic Package, half angle ±65°, typ. 12 mW/sr at 100mA
An Active Tag self-contained in a trackable object, providing 6 DoF information for any arbitrary object it is attached to. It carries a factory-installed Active Tag with 8 LEDs and a rechargeable battery with up to 10 hours of run time on a single charge.
Active tracking is supported only with the Ethernet camera system (Prime series or Slim 13E cameras). For instructions on how to set up a camera system see: Hardware Setup.
Connects to one of the PoE switches within the camera network.
For best performance, place the base station near the center of your tracking space, with unobstructed lines of sight to the areas where your Active Tags will be located during use. Although the wireless signal is capable of traveling through many types of obstructions, there still exists the possibility of reduced range as a result of interference, particularly from metal and other dense materials.
Do not place external electromagnetic or radiofrequency devices near the Base Station.
When the Base Station is working properly, the LED closest to the antenna blinks green while Motive is running.
BaseStation LEDs
Note: The behavior of the LEDs on the Base Station is subject to change.
Communication Indicator LED: When the BaseStation is successfully sending out data and communicating with the active pucks, the LED closest to the antenna blinks green. If this LED is red, it indicates that the BaseStation has failed to establish a connection with Motive.
Interference Indicator LED: The middle LED indicates whether there is other signal traffic on the respective radio channel and PAN ID that might interfere with the active components. This LED should stay dark for the active marker system to work properly. If it flashes red, consider switching both the channel and the PAN ID on all of the active components.
Power Indicator LED: The LED located at the corner, furthest from the antenna, indicates power for the BaseStation.
Connect two sets of active markers (4 LEDs in each set) into a Tag.
Connect the battery and/or a micro USB cable to power the Tag. The Tag accepts 3.3 V to 5.0 V input from the micro USB cable. When powering from the battery, use only batteries supplied by us. To recharge the battery, keep it connected to the Tag and then connect the micro USB cable.
To initialize the Tag, press the power switch once. Be careful not to hold the power switch down for more than a second, because this starts the device in firmware update (DFU) mode. If it initializes in DFU mode, indicated by two orange LEDs, simply power off and restart the Tag. To power off the Tag, hold down the power switch until the status LEDs go dark.
Once powered, you should be able to see the illumination of IR LEDs from the 2D reference camera view.
Puck Setup
Press the power button for 1-2 seconds and release. The top-left LED will illuminate orange while the Puck initializes. Once initialized, the bottom LED will light up green if it has made a successful connection with the Base Station. Then the top-left LED will start blinking green, indicating that sync packets are being received.
For more information, please read through the Active Puck page.
Active Pattern Depth
Settings → Live Pipeline → Solver tab. Default value: 12.
This adjusts the complexity of the illumination patterns produced by active markers. In most applications, the default value gives quality tracking results. If a high number of Rigid Bodies are tracked simultaneously, this value can be increased to allow more combinations of the illumination patterns on each marker. If this value is set too low, duplicate active IDs can be produced; if this error appears, increase the value of this setting.
Minimum Active Count
Settings → Live Pipeline → Solver tab. Default value: 3.
Sets the number of rays required to establish the active ID for each 'on' frame of an active marker cycle. If this value is increased and active markers become occluded, it may take longer for them to be re-established in the Motive view. The majority of applications will not need to alter this setting.
Active Marker Color
Settings → Views → 3D tab. Default color: blue.
The color assigned to this setting is used to indicate and distinguish active markers from passive markers in the viewer pane of Motive.
For tracking of the active LED markers, the following camera settings may need to be adjusted for best tracking results:
For tracking active markers, set the camera exposure a bit higher than when tracking passive markers. This allows the cameras to better detect the active markers. The optimal value will vary depending on the camera system setup, but in general you will want to set the camera exposure between 400 and 750 microseconds.
When tracking only active markers, the cameras do not need to emit IR light. In this case, you can disable the IR settings in the Devices pane.
Rigid Body definitions that are created from actively labeled reconstructions will search for specific marker IDs along with the marker placements to track the Rigid Body. This is explained further in the following section.
Duplicate active frame IDs
For active labels to work properly, it is important that each marker has a unique active ID. When markers share the same ID, there may be problems reconstructing those active markers. In this case, the following notification message will appear. If you see this notification, please contact Support to change the active IDs on the affected markers.
In recorded 3D data, the labels of unlabeled active markers will still indicate that they are active markers. As shown in the image below, an Active prefix is assigned in addition to the active ID to indicate that the marker is active. This applies only to individual active markers that are not auto-labeled. Markers that are auto-labeled using a trackable model will be assigned the respective label.
When a trackable asset (e.g. a Rigid Body) is defined using active markers, its active ID information is stored in the asset along with the marker positions. When auto-labeling markers in the space, the trackable asset will search for reconstructions with matching active IDs, in addition to the matching marker arrangement, to auto-label a set of markers. This adds an additional safeguard to the auto-labeler and prevents mislabeling errors.
Rigid Body definitions created from actively labeled reconstructions will search for the respective marker IDs in order to solve the Rigid Body. This gives a huge benefit because the active markers can be placed in perfectly symmetrical marker arrangements among multiple Rigid Bodies without running into labeling swaps. With active markers, only the 3D reconstructions with active IDs stored under the corresponding Rigid Body definition will contribute to the solve.
If a Rigid Body was created from actively labeled reconstructions, the corresponding active IDs are saved under the Rigid Body properties. In order for the Rigid Body to be tracked, reconstructions with matching marker IDs, in addition to matching marker placements, must be tracked in the volume. If the active ID is set to 0, no particular marker ID is assigned to the Rigid Body definition and any reconstruction can contribute to the solve.
This wiki contains instructions on operating OptiTrack motion capture systems. If you are new to the system, start with the Quick Start Guides to begin your capture experience.
You can navigate through pages using the links in the sidebar or links included within the pages. You can also use the search bar in the top-right corner to search for page names and keywords. If you have any questions that are not answered in this wiki or the other provided documentation, please check our forum or contact our Support team for further assistance.
OptiTrack website: http://www.optitrack.com
The Helpdesk: http://help.naturalpoint.com
NaturalPoint Forums: https://forums.naturalpoint.com
For versions of Motive 2.2 or older, please visit our old wiki site.
With an optimized system setup, motion capture systems are capable of obtaining extremely accurate tracking data from a small to medium sized capture volume. This quick start guide includes general tips and suggestions on precision capture system setups and important cautions to keep in mind. This page also covers some of the precision verification methods in Motive. For more general instructions, please refer to the Quick Start Guide: Getting Started or corresponding workflow pages.
Before going into details on precision tracking with an OptiTrack system, let's start with a brief explanation of the residual value, which is the key reconstruction output for monitoring system precision. The residual value is the average offset distance, in mm, between the converging rays when reconstructing a marker, and hence indicates the precision of the reconstruction. A smaller residual value means that the tracked rays converge more precisely and achieve a more accurate 3D reconstruction. A well-tracked marker will have a sub-millimeter average residual value. In Motive, the tolerable residual distance is defined in the Reconstruction Settings under the Application Settings panel.
When one or more markers are selected in Live mode or in the 2D Mode of captured data, the corresponding mean residual value is displayed in the Status Panel located at the bottom-right corner of Motive.
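To make the definition concrete, the sketch below computes a mean residual the way the text describes it: the average perpendicular distance from a candidate 3D point to each contributing camera ray. This is an illustrative calculation only, not Motive's internal solver, and the ray origins and directions are hypothetical inputs.

```python
# Illustrative only: mean residual as the average perpendicular distance (mm)
# from a reconstructed 3D point to each contributing camera ray.
import numpy as np

def mean_residual_mm(point_mm, ray_origins_mm, ray_dirs):
    residuals = []
    for origin, direction in zip(ray_origins_mm, ray_dirs):
        d = np.asarray(direction, float)
        d = d / np.linalg.norm(d)
        v = np.asarray(point_mm, float) - np.asarray(origin, float)
        perpendicular = v - np.dot(v, d) * d   # component of v perpendicular to the ray
        residuals.append(np.linalg.norm(perpendicular))
    return float(np.mean(residuals))

# Two nearly-converging rays miss a candidate point by a fraction of a millimeter.
print(mean_residual_mm([0.0, 0.0, 1000.0],
                       [[-500.0, 0.0, 0.0], [500.0, 0.0, 0.0]],
                       [[0.5, 0.0, 1.0], [-0.5, 0.0001, 1.0]]))
```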
First of all, optimize the capture volume for the most precise and accurate tracking results. Avoid a populated area when setting up the system and recording a capture. Clear any obstacles or trip hazards around the capture volume. Physical impacts on the setup will distort the calibration quality, and it could be critical especially when tracking at a sub-millimeter accuracy. Lastly, for best results, routinely recalibrate the capture volume.
Motion capture cameras detect reflected infrared light. Having other reflective objects in the volume will therefore alter the results negatively, which can be critical for precise tracking applications. If possible, use background objects that are IR-black and non-reflective. Capturing against a dark background provides clear contrast between bright and dark pixels, which could be less distinguishable against a white background.
Optimized camera placement techniques will greatly improve the tracking result and the measurement accuracy. The following guide highlights important setup instructions for the small volume tracking. For more details on general system setup, read through the Hardware Setup pages.
Mounting Locations
For precise tracking, better results will be obtained by placing cameras closer to the target object (adjusting focus will be required) in a sphere or dome-shaped camera arrangement, as shown in the images on the right. Good positional data in all dimensions (X, Y, and Z axis) will be attained only if there are cameras contributing to the calculation from a variety of different locations; each unique vantage adds additional data.
Mount Securely
For the most accurate results, cameras should be perfectly stationary, securely fastened onto a truss system or an extremely rigid object. Any slight deformation or fluctuation of the mounting structure may affect the result in sub-millimeter tracking applications. A small truss system is ideal for this setup. Take extreme caution when mounting onto speed rails attached to a wall, because the building may fluctuate on hot days.
Increase the f-stop (smaller aperture) to gain a larger depth of field. An increased depth of field keeps a greater portion of the capture volume in focus and makes measurements more consistent throughout the volume.
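For a rough sense of how much stopping down helps, the thin-lens approximation below relates depth of field to f-number, focal length, and subject distance. It is only a sketch under assumed values: the 0.005 mm circle of confusion and the 12 mm lens at 3 m are hypothetical, and real lenses and sensors will differ.

```python
# Thin-lens depth-of-field approximation (valid when the subject distance is
# much smaller than the hyperfocal distance): DOF ~ 2 * N * c * s^2 / f^2
def approx_dof_mm(f_number, focal_length_mm, subject_distance_mm,
                  circle_of_confusion_mm=0.005):  # assumed, roughly one pixel
    return (2.0 * f_number * circle_of_confusion_mm * subject_distance_mm ** 2
            / focal_length_mm ** 2)

# Hypothetical 12 mm lens focused at 3 m: stopping down from f/2 to f/4
# roughly doubles the in-focus band (about 1.25 m -> 2.5 m here).
print(approx_dof_mm(2.0, 12.0, 3000.0), approx_dof_mm(4.0, 12.0, 3000.0))
```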
Especially for close-up captures, camera aim and focus should be adjusted precisely. Aim the cameras towards the center of the capture volume. Optimize the camera focus by zooming into a marker in Motive and rotating the focus knob on the camera until the smallest marker is captured with the clearest image contrast. To zoom in and out of the camera view, place the mouse cursor over the 2D camera preview window in Motive and use the mouse scroll wheel.
For more information, please read through the Aiming and Focusing workflow page.
The following sections cover key configuration settings which need to be optimized for the precision tracking.
Camera settings are configured using the Devices pane and the Properties pane, both of which can be opened from the View tab in Motive.
Details
Number (varies): The number Motive has assigned to that particular camera.
Device Type (varies): The type of camera Motive has detected (PrimeX 41, PrimeX 13W, etc.).
Serial Number (varies): The serial number of the camera, which uniquely identifies it.
Focal Length (varies): The distance between the camera's image sensor and its lens.
General
Enabled (toggle On): When Enabled is toggled on, the camera is active and able to collect marker data.
Rate (maximum FPS): Set the system frame rate (FPS) to its maximum value. If you wish to use a slower frame rate, use the maximum frame rate during calibration and turn it down for the actual recording.
Reconstruction (toggle On): Determines whether the camera participates in 3D reconstruction.
Rate Multiplier (x1, 120 Hz): The rate multiplier, used for syncing external devices with the camera system.
Exposure (250 μs): The exposure of the camera. The higher the number, the more microseconds the camera's sensor is exposed to light. If you're having trouble seeing markers, raise the exposure; if there is too much reflection data in the volume, lower it.
Threshold (THR) (200): Do not change the Threshold (THR) or LED values; keep them at their default settings. The EXP and LED values are linked, so change only the EXP setting for brighter images. If you set EXP higher than 250, make sure to wand extra slowly to avoid blurred markers.
LED (toggle On): In some instances you may want to turn off the IR LEDs on a particular camera; for example, using an active wand for calibration reduces extraneous reflections from influencing the calibration.
Video Mode (default: Object mode): Changes the video mode of the camera. For more information, see the camera video types page of this wiki.
IR Filter (toggle On): Specific to PrimeX 13/13W, SlimX 13, and Prime Color FS cameras. Toggles the 850 nm IR filter, which allows only 850 nm IR light to reach the sensor. When toggled off, all light is visible to the camera's image sensor.
Gain (1: Low (Short Range)): Set the Gain to Low for all cameras. Higher gain settings will amplify noise in the image.
Display
Show Field of View (toggle Off): When toggled on, shows the camera's field of view. This is particularly useful when setting up a camera volume.
Show Frame Delivery Info (toggle Off): When toggled on, shows frame delivery info for all cameras in the system, overlaid on the selected camera's view.
Live-reconstruction settings can be configured under the Application Settings panel. These settings determine which data gets reconstructed into 3D data, and when needed, you can adjust the filter thresholds to prevent inaccurate data from being reconstructed. Read through the Application Settings page for more details on each setting. For precision tracking applications, the key settings and suggested values are listed below:
Solver tab, Residual (mm): < 2.00. Set the allowable value smaller for precision volume tracking. Any offset above 2.00 mm will be considered inaccurate, and the corresponding 2D data will be excluded from contributing to the reconstruction.
Solver tab, Minimum Rays to Start: ≥ 3. Set the minimum required number of rays higher. More accurate reconstruction is achieved when more rays converge within the allowable residual offset.
Camera tab, Minimum Pixel Threshold: ≥ 3. Since the cameras are placed closer to the tracked markers, each marker appears larger in the camera views. The minimum pixel threshold can be increased to filter out small extraneous reflections if needed.
Camera tab, Circularity: increase above the default (e.g., > 0.6). Increasing the circularity value filters out non-marker reflections and prevents collecting data where the calculated centroid is no longer reliable.
The following calibration instructions are specific to precision tracking. For more general information, refer to the Calibration page.
For calibrating small capture volumes for precision tracking, we recommend using a Micron Series wand, either the CWM-250 or CWM-125. These wands are made of invar alloy, very rigid and insensitive to temperature, and they are designed to provide a precise and constant reference dimension during calibration. At the bottom of the wand head, there is a label which shows a factory-calibrated wand length with a sub-millimeter accuracy. In the Calibration pane, select Micron Series under the OptiWand dropdown menu, and define the exact length under the Wand Length.
The CW-500 wand is designed for capturing medium to large volumes and is not suited for calibrating small volumes. Not only does it lack a label indicating its factory-calibrated length, it is also made of aluminum, which makes it more vulnerable to thermal expansion. During the wanding process, Motive references the wand length to calibrate the capture volume, and any distortion in the wand length would cause the calibrated capture volume to be scaled slightly differently, which can be significant when capturing precise measurements. For this reason, a Micron Series wand is better suited for precision tracking applications.
Note: Never touch the markers on the CWM-250 or CWM-125, since any changes can affect the calibration and overall data.
Precision Capture Calibration Tips
Wand slowly. Waving the wand around quickly at high exposure settings will blur the markers and distort the centroid calculations, ultimately reducing the quality of your calibration.
Avoid occluding any of the calibration markers while wanding. Occluding markers will reduce the quality of the calibration.
A variety of unique samples is needed to achieve a good calibration. Wand in a three-dimensional volume, waving the wand in a variety of orientations and throughout the volume.
Extra wanding in the target area you wish to capture will improve the tracking in the target region.
Wanding the edges of the volume helps improve the lens distortion calculations. This may cause Motive to report a slightly worse overall calibration result, but it will provide a better quality calibration, as explained below.
Starting/stopping the calibration process with the wand in the volume may help avoid getting rough samples outside your volume when entering and leaving.
Calibration reports and analyzing the reported error is a complicated subject because the calibration process uses its own samples for validation. For example, sampling near the edge of the volume may improve the accuracy of the system but produce slightly worse calibration results, because the samples near the edge have more errors to be corrected. The acceptable mean error varies based on the size of your volume, the number of cameras, and the desired accuracy. The key metrics to keep an eye on are the Mean 3D Error for the Overall Reprojection and the Wand Error. Generally, use calibrations with a Mean 3D Error of less than 0.80 mm and a Wand Error of less than 0.030 mm. These numbers may be hard to reproduce in regular volumes. Again, the acceptable numbers are subjective, but lower numbers are better in general.
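If you log calibration reports across sessions, a tiny helper like the one below can flag results that fall outside the rule-of-thumb thresholds quoted above. The thresholds are the ones stated in this section; whether they are appropriate for your volume is still a judgment call.

```python
# Rule-of-thumb check against the thresholds quoted above (0.80 mm / 0.030 mm).
def calibration_acceptable(mean_3d_error_mm, wand_error_mm,
                           max_mean_3d_error_mm=0.80, max_wand_error_mm=0.030):
    return (mean_3d_error_mm < max_mean_3d_error_mm
            and wand_error_mm < max_wand_error_mm)

print(calibration_acceptable(0.45, 0.021))  # True: within both thresholds
print(calibration_acceptable(0.95, 0.021))  # False: reprojection error too high
```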
In general, passive retro-reflective markers will provide better tracking accuracy. The boundary of the spherical marker can be more clearly distinguished on passive markers, and the system can identify an accurate position of the marker centroids. The active markers, on the other hand, emit light and the illumination may not appear as spherical on the camera view. Even if a spherical diffuser is used, there can be situations where the light is not evenly distributed. This could provide inaccurate centroid data. For this reason, passive markers are preferred for precision tracking applications.
For close-up capture, it may be inevitable to place markers close to one another, and when markers are placed in close vicinity, their reflections may merge as seen by the camera's imager. Merged reflections will have an inaccurate centroid location, or they may even be completely discarded by the circularity filter or the intrusion detection feature. For best results, keep the circularity filter at a higher setting (> 0.6) and decrease the intrusion band in the camera group 2D filter settings to make sure only relevant reflections are reconstructed. The optimal balance will depend on the number and arrangement of the cameras in the setup.
There are editing methods to discard or modify such data in post-processing. However, for the most reliable results, marker intrusions should be prevented before capture by separating the marker placements or by optimizing the camera placements.
Once a Rigid Body is defined from a set of reconstructed points, utilize the Rigid Body Refinement feature to further refine the Rigid Body definition for precision tracking. The tool allows Motive to collect additional samples in the live mode for achieving more accurate tracking results.
In a mocap system, camera mount structures and other hardware components may be affected by temperature fluctuations. Refer to linear thermal expansion coefficient tables to examine which materials are susceptible to temperature changes, and avoid using temperature-sensitive materials for mounting the cameras. For example, aluminum has a relatively high thermal expansion coefficient, so mounting cameras onto aluminum structures may distort the calibration quality. For best accuracy, routinely recalibrate the capture volume, and take temperature fluctuation into account both when selecting the mount structures and before collecting data.
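As a worked example of why the mounting material matters, linear thermal expansion is simply dL = alpha * L * dT. The coefficients below are approximate textbook values, and the 2 m beam with a 5 degree swing is a hypothetical case; the point is the order of magnitude.

```python
# Illustrative linear thermal expansion: dL = alpha * L * dT.
# Approximate coefficients (1/degC); exact values vary by alloy.
ALPHA = {"aluminum": 23e-6, "steel": 12e-6, "invar": 1.2e-6}

def expansion_mm(material, length_mm, delta_temp_c):
    return ALPHA[material] * length_mm * delta_temp_c

# A 2 m aluminum beam warming by 5 degC grows by about 0.23 mm, which is already
# significant at sub-millimeter accuracy; an invar structure moves roughly 20x less.
for material in ALPHA:
    print(material, round(expansion_mm(material, 2000.0, 5.0), 3), "mm")
```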
An ideal way to avoid the influence of environmental temperature is to install the system in a temperature-controlled volume. If such an option is unavailable, routinely calibrate the volume before capture, and recalibrate the volume between sessions when capturing for a long period. The effects are especially noticeable on hot days and will significantly affect your results, so consistently monitor the average residual value and how well your rays converge to individual markers.
The cameras heat up with extended use, and changes in internal hardware temperature may also affect the capture data. For this reason, avoid capturing or calibrating right after powering the system. Tests have found that the cameras need to warm up in Live mode for about an hour until they reach a stable temperature. Typical stable temperatures are between 40-50 degrees Celsius, or 25 degrees Celsius above the ambient temperature. For Ethernet camera models, camera temperatures can be monitored from the Cameras View in Motive (Cameras View > Eye Icon > Camera Info).
If a camera exceeds 80 degrees Celsius, this is a cause for concern: it can cause frame drops and potential harm to the camera. If possible, keep the ambient environment as cool, dry, and consistent as possible.
Especially for measuring at sub-millimeters, even a minimal shift of the setup can affect the recordings. Re-calibrate the capture volume if your average residual values start to deviate. In particular, watch out for the following:
Avoid touching the cameras and the camera mounts.
Keep the capture area away from heavy foot traffic. People shouldn't be walking around the volume while the capture is taking place.
Closing doors, even from the outside, may be noticeable during recording.
The following methods can be used to check the tracking accuracy and to better optimize the reconstruction settings in Motive.
First, go into the Perspective View pane and select a marker, then go to the Camera Preview pane > Eye button > Set Marker Centroids: True. Make sure the cameras are in Object mode, then zoom into the selected marker in the 2D view. The marker will have two crosshairs on it: one white and one yellow. The amount of offset between the crosshairs gives you an idea of how closely the calculated 2D centroid location (thicker white line) aligns with the reconstructed position (thinner yellow line). Switching between grayscale mode and Object mode will make the errors more distinguishable. The image below is an example of a poor calibration; in a good calibration, the yellow and white lines closely align with each other.
The calibration quality can also be analyzed by checking the convergence of the tracked rays into a marker. This is not as precise as the first method, but the tracked rays can be used to check the calibration quality of multiple cameras at once. First, make sure tracked rays are visible: Perspective View pane > Eye button > Tracked Rays. Then select a marker in the Perspective View pane. Zoom all the way into the marker (you may need to zoom into the sphere), and you will be able to see the tracking rays (green) converging at the center of the marker. A good calibration should have all the rays converging at approximately one point, as shown in the following image. Essentially, this is a visual way of examining the average residual offset of the converging rays.
In Motive 3.0, a new feature was introduced called Continuous Calibration. This can aid in keeping your precision for longer in between calibrations. For more information regarding continuous calibration please refer to our Wiki page Continuous Calibration.
USB Cameras are currently not supported in 3.x versions of Motive. The USB camera pages on this wiki are purely for reference only, at this time.
In optical motion capture systems, proper camera placement is very important in order to efficiently utilize the captured images from each camera. Before setting up the cameras, it is a good idea to plan ahead and create a blueprint of the camera placement layout. This page highlights the key aspects and tips for efficient camera placement.
A well-arranged camera placement can significantly improve the tracking quality. When tracking markers, 3D coordinates are reconstructed from the 2D views seen by each camera in the system. More specifically, correlated 2D marker positions are triangulated to compute the 3D position of each marker. Thus, having multiple distinct vantages on the target volume is beneficial because it allows wider angles for the triangulation algorithm, which in turn improves the tracking quality. Accordingly, an efficient camera arrangement should have cameras distributed appropriately around the capture volume. By doing so, not only will the tracking accuracy be improved, but uncorrelated rays and marker occlusions will also be prevented. Depending on the type of tracking application, the capture volume environment, and the size of the mocap system, the proper camera placement layout may vary.
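The effect of camera geometry on triangulation can be illustrated with the classic closest-point-of-two-rays construction below. This is only an illustrative sketch, not Motive's reconstruction algorithm: when two cameras view a marker from nearly the same direction, the denominator approaches zero and the solution becomes ill-conditioned, which is the numerical face of the "wider angles" argument above.

```python
# Illustrative two-ray triangulation: midpoint of the shortest segment between
# two camera rays, each given as an origin and a direction vector.
import numpy as np

def triangulate_midpoint(o1, d1, o2, d2):
    o1, o2 = np.asarray(o1, float), np.asarray(o2, float)
    d1 = np.asarray(d1, float) / np.linalg.norm(d1)
    d2 = np.asarray(d2, float) / np.linalg.norm(d2)
    w0 = o1 - o2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b          # shrinks toward 0 as the rays become parallel,
    t1 = (b * e - c * d) / denom   # i.e. as the two vantage points lose separation
    t2 = (a * e - b * d) / denom
    return (o1 + t1 * d1 + o2 + t2 * d2) / 2.0

# Two cameras 4 m apart, both aimed at a marker near (0, 0, 3 m).
print(triangulate_midpoint([-2000, 0, 0], [2, 0, 3], [2000, 0, 0], [-2, 0, 3]))
```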
An ideal camera placement varies depending on the capture application. In order to figure out the best placements for a specific application, a clear understanding of the fundamentals of optical motion capture is necessary.
Depending on the captured motion types and volume settings, the instructions for an ideal camera arrangement vary. For applications that require tracking markers at low heights, it is beneficial to have some cameras placed and aimed at low elevations. For applications tracking markers placed strictly on the front of the subject, cameras at the rear won't see those markers and, as a result, become unnecessary. For large-volume setups, installing cameras around the volume at the highest elevation will maximize camera coverage and the capture volume size. For captures valuing extreme accuracy, it is better to place cameras close to the object so that the cameras capture more pixels per marker and more accurately track small changes in position.
For common applications of tracking 3D position and orientation of Skeletons and Rigid Bodies, place the cameras on the periphery of the capture volume. This setup typically maximizes the camera overlap and minimizes wasted camera coverage. General tips include the following:
Mount cameras at the desired maximum height of the capture volume.
Distribute the cameras equidistantly around the setup area.
Adjust angles of cameras and aim them towards the target volume.
For cameras with rectangular FOVs, mount the cameras in landscape orientation. In very small setup areas, cameras can be aimed in portrait orientation to increase vertical coverage, but this typically reduces camera overlap, which can reduce marker continuity and data quality.
TIP: For capture setups involving large camera counts, it is useful to separate the capture volume into two or more sections. This reduces the computational load on the software.
Around the volume
For common applications tracking a Skeleton or a Rigid Body to obtain the 6 Degrees of Freedom (x,y,z-position and orientation) data, it is beneficial to arrange the cameras around the periphery of the capture volume for tracking markers both in front and back of the subject.
Camera Elevations
It can also be beneficial to place cameras at varying elevations. Doing so provides more diverse viewing angles from both high and low elevations and can significantly increase the coverage of the volume. The frequency of marker occlusions will be reduced, and the accuracy of detecting marker elevations will be improved.
Camera to Camera Distance
Separating every camera by a consistent distance is recommended. When cameras are placed in close vicinity, they capture similar images of the tracked subject, and the extra images contribute neither to preventing occlusions nor to the reconstruction calculations. This overlap detracts from the benefit of a higher camera count and also doubles the computational load for the calibration process. Moreover, it increases the chance of marker occlusions, because markers will be blocked from multiple views simultaneously whenever obstacles are introduced.
Camera to Object Distance
An ideal distance between a camera and the captured subject also depends on the purpose of the capture. A long distance between the camera and the object gives more camera coverage for larger volume setups. Capturing at a short distance, on the other hand, gives less camera coverage, but the tracking measurements will be more accurate. The camera's lens focus ring may need to be adjusted for close-up tracking applications.
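The pinhole-camera relation below puts a number on the "more pixels per marker" point: image size scales with focal length over distance. All of the parameter values (marker diameter, lens, pixel pitch) are hypothetical examples, not specifications of any particular OptiTrack camera.

```python
# Pinhole-camera approximation of how many pixels a spherical marker spans.
def marker_size_px(marker_diameter_mm, distance_mm, focal_length_mm, pixel_pitch_mm):
    return marker_diameter_mm * focal_length_mm / (distance_mm * pixel_pitch_mm)

# Hypothetical 14 mm marker, 12 mm lens, 0.006 mm pixel pitch:
# roughly 4.7 px across at 6 m versus 14 px across at 2 m, so moving the
# cameras closer gives the centroid fit far more pixels to work with.
print(marker_size_px(14, 6000, 12, 0.006), marker_size_px(14, 2000, 12, 0.006))
```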
USB Cameras are currently not supported in 3.x versions of Motive. The USB camera pages on this wiki are purely for reference only, at this time.
The OptiTrack Duo/Trio tracking bars are factory calibrated, and there is no need to calibrate the cameras to use the system. By default, the tracking volume origin is set at the center of the cameras, and the axes are oriented so that the Z-axis points forward, the Y-axis up, and the X-axis left.
Steps for Adjusting the Coordinate System
[Motive] Open the Ground Planes page.
[Motive] Click the Set Ground Plane button, and the global origin will be adjusted.
Before setting up a motion capture system, choose a suitable setup area and prepare it in order to achieve the best tracking performance. This page highlights some of the considerations to make when preparing the setup area for general tracking applications. Note that this page provides just general recommendations and these could vary depending on the size of a system or purpose of the capture.
First of all, pick a place to set up the capture volume.
Setup Area Size
Make sure there is plenty of room for setting up the cameras; it is usually beneficial to have extra space in case the system setup needs to be altered. Also pick an area with enough vertical space: setting up the cameras at a high elevation gives them wider lines of sight and better coverage of the capture volume.
Minimal Foot Traffic
After the camera system is calibrated, it should remain unaltered in order to maintain the calibration quality. Physical contact with the cameras could change the setup, requiring it to be re-calibrated. To prevent this, pick a space with minimal foot traffic.
Flooring
Avoid reflective flooring. The IR lights from the cameras could be reflected by it and interfere with tracking. If this is inevitable, consider covering the floor with surface mats to prevent the reflections.
Avoid flexible or deformable flooring; such flooring can negatively impact your system's calibration.
For the best tracking performance, minimize ambient light interference within the setup area. The motion capture cameras track the markers by detecting reflected infrared light and any extraneous IR lights that exist within the capture volume could interfere with the tracking.
Sunlight: Block any open windows that might let sunlight in. Sunlight contains wavelengths within the IR spectrum and could interfere with the cameras.
IR Light sources: Remove any unnecessary lights in IR wavelength range from the capture volume. IR lights could be emitted from sources such as incandescent, halogen, and high-pressure sodium lights or any other IR based devices.
Dark-colored objects absorb most visible light; however, this does not mean that they absorb IR light as well. Therefore, the color of a material is not a good way of determining whether an object will be visible within the IR spectrum. Some materials look dark to the human eye but appear bright white to the IR cameras. If these items are placed within the tracking volume, they could introduce extraneous reconstructions.
Since you already have IR cameras in hand, use one of them to check whether there are IR-white materials within the volume. If there are, move them out of the volume or cover them up.
Remove any unnecessary obstacles out of the capture volume since they could block cameras' view and prevent them from tracking the markers. Leave only the items that are necessary for the capture.
Remove reflective objects nearby or within the setup area since IR illumination from the cameras could be reflected by them. You can also use non-reflective tapes to cover the reflective parts.
Prime 41 and Prime 17W cameras are equipped with powerful IR LED rings, which enable tracking outdoors, even in the presence of some extraneous IR light. The strong illumination from the Prime 41 cameras allows a mocap system to better distinguish marker reflections from extraneous illumination. System settings and camera placements may need to be adjusted for outdoor tracking applications.
With a BaseStation and active markers communicating on the same RF channel, active markers will be reconstructed and tracked in Motive automatically. From the unique illumination patterns, each active marker is labeled individually, and a unique marker ID is assigned to the corresponding reconstruction in Motive. These IDs can be monitored in the Live-reconstruction mode or in the 2D Mode. To check the marker IDs of the respective reconstructions, enable the Marker Labels option under the visual aids menu, and the IDs of selected markers will be displayed. The marker IDs assigned to active marker reconstructions are unique and can be used to point to a specific marker among many reconstructions in the scene.
This page covers the general specifications of the Prime Color camera. For details on how to set up and use the Prime Color, please refer to the Prime Color camera setup pages in this wiki.
USB camera models, including Flex series cameras and V120:Duo/Trio tracking bars, are currently not supported in Motive 3.0.x versions. For those systems, please refer to the Motive 2.x documentation.
To calculate 3D marker locations, tracked markers must be simultaneously captured by at least two synchronized cameras in the system. When not enough cameras capture the 2D positions, the 3D marker will not be present in the captured data. As a result, the collected marker trajectory will have gaps, and the accuracy of the capture will be reduced. Furthermore, extra effort and time will be required to post-process the data. Thus, marker visibility throughout the capture is very important for tracking quality, and cameras need to capture from diverse vantages so that marker occlusions are minimized.
Again, the optimal camera arrangement depends on the purpose and features of the capture application. Plan the camera placement specific to the capture application so that the capability of the provided system is fully utilized. Please contact us if you need help figuring out the optimal camera arrangement.
For a typical motion capture setup, placing cameras at high elevations is recommended. Doing so maximizes the capture coverage in the volume and minimizes the chance of subjects bumping into the truss structure, which can degrade the calibration. Furthermore, when cameras are placed at low elevations and aimed across from one another, the synchronized IR illumination from each camera will be detected and will need to be masked from the 2D view.
If you wish to change the location and orientation of the global axes, you can use the ground plane tools with a Rigid Body or a calibration square to set the global origin.
When using the Duo/Trio tracking bars, you can set the coordinate origin at the desired location and orientation using either a Rigid Body or a calibration square as a reference point. Using a calibration square will allow you to set the origin more accurately. You can also use a custom calibration square to set this.
First, place the calibration square at the desired origin. If you are using a Rigid Body, its position and orientation will be used as the reference.
[Motive] Open the Ground Planes page.
[Motive] Select the type of calibration square that will be used as a reference to set the global origin. Set it to Auto if you are using a calibration square from us. If you are using a Rigid Body, select the Rigid Body option from the drop-down menu. If you are using a custom calibration square, you will also need to set the vertical offset.
[Motive] Select the calibration square markers or the Rigid Body markers in the 3D viewport.
System setup area depends on the size of the mocap system and how the cameras are positioned. To get a general idea, check out the feature on our website.
All cameras are equipped with IR filters, so extraneous lights outside of the infrared spectrum (e.g. fluorescent lights) will not interfere with the cameras. IR lights that cannot be removed or blocked from the setup area can be masked in Motive using the masking tools during system calibration. However, this feature completely discards image data within the masked regions, and overuse could negatively impact tracking. Thus, it is best to physically remove the object whenever possible.
Please read through the page for more information.