With an optimized system setup, motion capture systems are capable of obtaining extremely accurate tracking data from a small- to medium-sized capture volume. This quick start guide includes general tips and suggestions for precision capture system setups, along with important cautions to keep in mind. This page also covers some of the precision verification methods in Motive. For more general instructions, please refer to the Quick Start Guide: Getting Started or the corresponding workflow pages.
Before going into details on precision tracking with an OptiTrack system, let's start with a brief explanation of the residual value, which is the key reconstruction output for monitoring system precision. The residual value is the average offset distance, in mm, between the converging rays when reconstructing a marker, and it therefore indicates the precision of the reconstruction. A smaller residual value means that the tracked rays converge more precisely, achieving a more accurate 3D reconstruction. A well-tracked marker will have a sub-millimeter average residual value. In Motive, the tolerable residual distance is defined in the Reconstruction Settings under the Application Settings panel.
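To make the idea concrete, the sketch below computes a mean residual as the average perpendicular distance from a reconstructed 3D point to each contributing camera ray. This is purely illustrative Python with made-up ray data; it is not Motive's internal solver.

```python
import numpy as np

def point_to_ray_distance(p, origin, direction):
    """Perpendicular distance from point p to a ray defined by an origin and direction."""
    d = direction / np.linalg.norm(direction)
    v = p - origin
    return np.linalg.norm(v - np.dot(v, d) * d)

def mean_residual_mm(p, rays):
    """Average offset (mm) between a reconstructed point and the rays that produced it."""
    return float(np.mean([point_to_ray_distance(p, o, d) for o, d in rays]))

# Three rays that nearly, but not exactly, converge at the origin (units: mm).
rays = [
    (np.array([1000.0, 0.0, 0.0]), np.array([-1.0, 0.0005, 0.0])),
    (np.array([0.0, 1000.0, 0.0]), np.array([0.0, -1.0, 0.0005])),
    (np.array([0.0, 0.0, 1000.0]), np.array([0.0005, 0.0, -1.0])),
]
print(mean_residual_mm(np.zeros(3), rays))  # ~0.5 mm average residual
```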
When one or more markers are selected in Live mode or in the 2D data of a recorded capture, the corresponding mean residual value is displayed in the Status Panel located at the bottom-right corner of Motive.
First of all, optimize the capture volume for the most precise and accurate tracking results. Avoid a populated area when setting up the system and recording a capture, and clear any obstacles or trip hazards around the capture volume. Physical impacts on the setup will degrade the calibration quality, which can be critical when tracking at sub-millimeter accuracy. Lastly, for best results, routinely recalibrate the capture volume.
Motion capture cameras detect reflected infrared light, so having other reflective objects in the volume will negatively affect the results, which can be critical for precise tracking applications. If possible, use background objects that are IR-black and non-reflective. Capturing against a dark background provides clear contrast between bright and dark pixels, which can be harder to distinguish against a white background.
Optimized camera placement techniques will greatly improve the tracking results and the measurement accuracy. The following guide highlights important setup instructions for small-volume tracking. For more details on general system setup, read through the Hardware Setup pages.
Mounting Locations
For precise tracking, better results will be obtained by placing cameras closer to the target object (adjusting the focus will be required) in a sphere- or dome-shaped camera arrangement, as shown in the images on the right. Good positional data in all dimensions (X, Y, and Z axes) will be attained only if cameras contribute to the calculation from a variety of different locations; each unique vantage point adds additional data.
Mount Securely
For the most accurate results, cameras should be perfectly stationary, securely fastened onto a truss system or an extremely rigid object. Any slight deformation or fluctuation of the mount structures may affect the results in sub-millimeter tracking applications. A small truss system is ideal for the setup. Take extreme caution when mounting onto speed rails attached to a wall, because the building structure itself can shift slightly, for example on hot days.
Increase the f-stop (smaller aperture) to gain a larger depth of field. An increased depth of field keeps a greater portion of the capture volume in focus and makes measurements more consistent throughout the volume.
Especially for close-up captures, camera aim and focus should be adjusted precisely. Aim the cameras towards the center of the capture volume. Optimize the camera focus by zooming into a marker in Motive and rotating the focus knob on the camera until the smallest marker is captured with the clearest image contrast. To zoom in and out of the camera view, place the mouse cursor over the 2D camera preview window in Motive and use the scroll wheel.
For more information, please read through the Aiming and Focusing workflow page.
The following sections cover key configuration settings that need to be optimized for precision tracking.
Camera settings are configured using the Devices pane and the Properties pane, both of which can be opened from the View tab in Motive.
Gain
Recommended: 1 (Low / Short Range)
Set the Gain setting to Low for all cameras. Higher gain settings will amplify noise in the image.
Frame Rate
Recommended: Maximum FPS
Set the system frame rate (FPS) to its maximum value. If you wish to use a slower frame rate, use the maximum frame rate during calibration and turn it down for the actual recording.
Threshold (THR) / IR LED
Recommended: THR 200, LED 15
Keep the Threshold (THR) and LED values at their default settings. The EXP and LED values are linked, so change only the EXP setting for brighter images. If you set EXP higher than 250, make sure to wand extra slowly to avoid blurred markers.
Exposure (EXP)
Recommended: Most stable
For precision capture, it is not always necessary to set the camera exposure to its lowest value. Instead, the exposure setting should be configured so that the reconstruction is most stable. Zoom into a marker and examine the jitter while changing the exposure setting, and use the exposure value that gives the most stable reconstruction. Later sections will cover how to check the reconstruction and tracking quality. For now, set this value as low as possible while maintaining tracking and without losing the contrast of the reflections.
Live-reconstruction settings can be configured in the Application Settings panel. These settings determine which data gets reconstructed into 3D data, and, when needed, you can adjust the filter thresholds to prevent inaccurate data from being reconstructed. Read through the Application Settings page for more details on each setting. For precision tracking applications, the key settings and suggested values are listed below:
Residual (mm)
Suggested: < 2.00
Minimum Rays
Suggested: ≥ 3
Set the minimum required number of rays higher. More accurate reconstruction is achieved when more rays converge within the allowable residual offset.
Minimum Thresholded Pixels
Suggested: ≥ 4
Since the cameras are placed closer to the tracked markers, each marker will appear larger in the camera views. The minimum number of thresholded pixels can be increased to filter out small extraneous reflections if needed.
Circularity
Suggested: ≥ 0.6
The following calibration instructions are specific to precision tracking. For more general information, refer to the Calibration page.
For calibrating small capture volumes for precision tracking, we recommend using a Micron Series wand, either the CWM-250 or CWM-125. These wands are made of invar, an alloy that is very rigid and insensitive to temperature changes, and they are designed to provide a precise and constant reference dimension during calibration. At the bottom of the wand head, there is a label that shows the factory-calibrated wand length with sub-millimeter accuracy. In the Calibration pane, select Micron Series under the OptiWand dropdown menu, and enter the exact length under Wand Length.
The CW-500 wand is designed for capturing medium to large volumes, and it is not suited for calibrating small volumes. Not only does it lack a label indicating a factory-calibrated length, but it is also made of aluminum, which makes it more vulnerable to thermal expansion. During the wanding process, Motive references the wand length when calibrating the capture volume, and any distortion in the wand length would cause the calibrated capture volume to be scaled slightly differently, which can be significant when capturing precise measurements. For this reason, a Micron Series wand is more suitable for precision tracking applications.
Note: Never touch the markers on the CWM-250 or CWM-125, since any change can affect the calibration and the overall data.
Precision Capture Calibration Tips
Wand slowly. Waving the wand around quickly at high exposure settings will blur the markers and distort the centroid calculations, ultimately reducing the quality of your calibration.
Avoid occluding any of the calibration markers while wanding. Occluding markers will reduce the quality of the calibration.
A variety of unique samples is needed to achieve a good calibration. Wand three-dimensionally: wave the wand in a variety of orientations and throughout the volume.
Extra wanding in the target area you wish to capture will improve the tracking in the target region.
Wanding the edges of the volume helps improve the lens distortion calculations. This may cause Motive to report a slightly worse overall calibration result, but it will provide a better quality calibration, as explained below.
Starting/stopping the calibration process with the wand in the volume may help avoid getting rough samples outside your volume when entering and leaving.
Analyzing the reported calibration error is a complicated subject because the calibration process uses its own samples for validation. For example, sampling near the edge of the volume may improve the accuracy of the system but produce slightly worse calibration results, because samples near the edge have more errors to be corrected. Acceptable mean error varies based on the size of your volume, the number of cameras, and the desired accuracy. The key metrics to keep an eye on are the Mean 3D Error for the Overall Reprojection and the Wand Error. Generally, use calibrations with a Mean 3D Error less than 0.80 mm and a Wand Error less than 0.030 mm. These numbers may be hard to reproduce in regular volumes. Again, the acceptable numbers are subjective, but lower numbers are better in general.
In general, passive retro-reflective markers provide better tracking accuracy. The boundary of the spherical marker can be more clearly distinguished on passive markers, so the system can identify an accurate position for the marker centroids. Active markers, on the other hand, emit light, and the illumination may not appear spherical in the camera view. Even if a spherical diffuser is used, there can be situations where the light is not evenly distributed, which can yield inaccurate centroid data. For this reason, passive markers are preferred for precision tracking applications.
For close-up capture, it may be unavoidable to place markers close to one another, and when markers are placed in close vicinity, their reflections may merge as seen by the camera's imager. Merged reflections will have an inaccurate centroid location, or they may even be discarded entirely by the circularity filter or the intrusion detection feature. For best results, keep the circularity filter at a higher setting (> 0.6) and decrease the intrusion band in the camera group 2D filter settings to make sure only relevant reflections are reconstructed. The optimal balance will depend on the number and arrangement of the cameras in the setup.
There are editing methods for discarding or correcting the affected data. However, for the most reliable results, such marker intrusions should be prevented before the capture by spreading out the marker placements or by optimizing the camera placements.
Once a Rigid Body is defined from a set of reconstructed points, utilize the Rigid Body Refinement feature to further refine the Rigid Body definition for precision tracking. The tool allows Motive to collect additional samples in Live mode to achieve more accurate tracking results.
In a mocap system, camera mount structures and other hardware components may be affected by temperature fluctuations. Refer to linear thermal expansion coefficient tables to examine which materials are susceptible to temperature changes, and avoid using temperature-sensitive materials for mounting the cameras. For example, aluminum has a relatively high thermal expansion coefficient, so mounting cameras onto aluminum structures may degrade the calibration quality. For best accuracy, routinely recalibrate the capture volume, and take temperature fluctuation into account both when selecting the mount structures and before collecting data.
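As a rough illustration of why the mounting material matters, the sketch below applies the linear expansion relation (change in length = coefficient x length x temperature change) using typical handbook coefficients; the beam length and temperature swing are assumptions chosen for the example.

```python
# Illustrative only: linear thermal expansion of a camera mounting beam.
# delta_L = alpha * L * delta_T
alpha = {"aluminum": 23e-6, "invar": 1.2e-6}  # 1/degC, typical handbook values
length_m = 2.0                                # assumed 2 m mounting beam
delta_T = 5.0                                 # assumed 5 degC swing during a session

for material, a in alpha.items():
    delta_mm = a * length_m * delta_T * 1000.0
    print(f"{material}: ~{delta_mm:.3f} mm of expansion")
# aluminum: ~0.230 mm, already significant for sub-millimeter tracking
# invar:    ~0.012 mm
```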
An ideal way of avoiding the influence of environmental temperature is to install the system in a temperature-controlled volume. If such an option is unavailable, routinely calibrate the volume before capture, and recalibrate the volume between sessions when capturing for a long period. The effects are especially noticeable on hot days and will significantly affect your results, so consistently monitor the average residual value and how well the rays converge on individual markers.
The cameras heat up with extended use, and changes in internal hardware temperature may also affect the capture data. For this reason, avoid capturing or calibrating right after powering on the system. Tests have found that the cameras need to warm up in Live mode for about an hour before they reach a stable temperature. Typical stable temperatures are between 40 and 50 degrees Celsius, or about 25 degrees Celsius above the ambient temperature. For Ethernet camera models, camera temperatures can be monitored from the Cameras View in Motive (Cameras View > Eye Icon > Camera Info).
If a camera exceeds 80 degrees Celsius, this is a cause for concern: it can cause frame drops and potentially harm the camera. If possible, keep the ambient environment as cool, dry, and consistent as possible.
Especially when measuring at sub-millimeter accuracy, even a minimal shift of the setup can affect the recordings. Re-calibrate the capture volume if your average residual values start to deviate. In particular, watch out for the following:
Avoid touching the cameras and the camera mounts.
Keep the capture area away from heavy foot traffic. People shouldn't be walking around the volume while the capture is taking place.
Closing doors, even from the outside, may be noticeable during recording.
The following methods can be used to check the tracking accuracy and to better optimize the reconstruction settings in Motive.
The calibration quality can also be analyzed by checking the convergence of the tracked rays onto a marker. This is not as precise as the centroid comparison method described below, but the tracked rays can be used to check the calibration quality of multiple cameras at once. First of all, make sure tracked rays are visible: Perspective View pane > Eye button > Tracked Rays. Then, select a marker in the Perspective View pane. Zoom all the way into the marker (you may need to zoom into the sphere), and you will be able to see the tracked rays (green) converging at the center of the marker. In a good calibration, all of the rays converge at approximately one point, as shown in the following image. Essentially, this is a visual way of examining the average residual offset of the converging rays.
Motive 3.0 introduced a new feature called Continuous Calibration, which can help maintain precision for longer between calibrations. For more information, please refer to the Continuous Calibration wiki page.
Residual (mm): Set the allowable residual value smaller for precision volume tracking. Any offset above 2.00 mm will be considered inaccurate, and the corresponding 2D data will be excluded from contributing to the reconstruction.
Circularity: Increasing the circularity value filters out non-marker reflections and prevents collecting data where the calculated centroid is no longer reliable.
First, go to the Perspective View pane and select a marker, then go to the Camera Preview pane > Eye Button > Set Marker Centroids: True. Make sure the cameras are in Object mode, then zoom into the selected marker in the 2D view. The marker will have two crosshairs on it, one white and one yellow. The amount of offset between the crosshairs gives you an idea of how closely the calculated 2D centroid location (thicker white line) aligns with the reconstructed position (thinner yellow line). Switching between grayscale mode and Object mode will make the errors more distinguishable. The image below is an example of a poor calibration; in a good calibration the yellow and white lines closely align with each other.
This wiki contains instructions on operating OptiTrack motion capture systems. If you are new to the system, start with the Quick Start Guides to begin your capture experience.
You can navigate through pages using the links in the sidebar or the links included within the pages. You can also use the search bar at the top-right corner to search for page names and keywords. If you have any questions that are not covered in this wiki or in the other provided documentation, please check our forum or contact our Support for further assistance.
OptiTrack website: http://www.optitrack.com
The Helpdesk: http://help.naturalpoint.com
NaturalPoint Forums: https://forums.naturalpoint.com
Welcome to the Quick Start Guide: Getting Started!
This guide provides a quick walk-through of installing and using OptiTrack motion capture systems. Key concepts and instructions are summarized in each section of this page to help you get familiarized with the system and get you started with the capture experience.
Note that Motive offers features far beyond the ones listed in this guide, and the system can be further optimized for your specific capture applications using those additional features. For more detailed information on each workflow, read through the corresponding workflow pages in this wiki: hardware setup and software setup.
For best tracking results, you need to prepare and clean up the capture environment before setting up the system. First, remove unnecessary objects that could block the camera views. Cover open windows and minimize incoming sunlight. Avoid setting up a system over reflective flooring, since IR light from the cameras may reflect off the floor and add noise to the data. If this is not an option, use rubber mats to cover the reflective area. Likewise, items with reflective surfaces or illuminating features should be removed or covered with non-reflective materials to avoid extraneous reflections.
Key Checkpoints for a Good Capture Area
Minimize ambient lights, especially sunlight and other infrared light sources.
Clean capture volume. Remove unnecessary obstacles within the area.
Tape over, or cover, remaining reflective objects in the area.
See Also: Hardware Setup workflow pages.
Ethernet Camera Models: PrimeX series and SlimX 13 cameras. Follow the below wiring diagram and connect each of the required system components.
Connect the PoE Switch(es) to the Host PC: Start by connecting a PoE switch to the host PC via an Ethernet cable. Since the camera system takes up a large amount of data bandwidth, the Ethernet camera network traffic must be separated from the office/local area network. If the computer used for capture is connected to an existing network, you will need to use a second Ethernet port or an add-on network card for connecting the computer to the camera network. When you do, make sure to turn off your computer's firewall for that particular network under the Windows Firewall settings.
Connect the Ethernet Cameras to the PoE Switch(s): Ethernet cameras connect to the host PC via PoE/PoE+ switches using Cat 6, or above, Ethernet cables.
Power the Switches: The switch must be powered in order to power the cameras. To completely shut down the camera system, the network switch needs to be powered off.
Ethernet Cables: Ethernet cable connection is subject to the limitations of the PoE (Power over Ethernet) and Ethernet communications standards, meaning that the distance between camera and switch can go up to about 100 meters when using Cat 6 cables (Ethernet cable type Cat5e or below is not supported). For best performance, do not connect devices other than the computer to the camera network. Add-on network cards should be installed if additional Ethernet ports are required.
Ethernet Cable Requirements
Cable Type
There are multiple categories of Ethernet cables, and each has different specifications for maximum data transmission rate and cable length. For an Ethernet-based system, Category 6 or above Gigabit Ethernet cables should be used. 10 Gigabit Ethernet cables (Cat6a or above) are recommended, in conjunction with a 10 Gigabit uplink switch, for the connection between the uplink switch and the host PC in order to accommodate the high data traffic.
Electromagnetic Shielding
Also, please use cables that have electromagnetic interference shielding. If unshielded cables are used, cables that run close to each other can interfere with one another and cause cameras to stall in Motive.
External Sync: If you wish to connect external devices, use the eSync synchronization hub. Connect the eSync into one of the PoE switches using an Ethernet cable, or if you have a multi-switch setup, plug the eSync into the aggregation switch.
Uplink Switch: For systems with higher camera counts that use multiple PoE switches, use an uplink Ethernet switch to link all of the switches and connect them to the host PC. In the end, the switches must be connected in a star topology, with the uplink switch as the central node connecting to the host PC. NEVER daisy-chain multiple PoE switches in series, because doing so can introduce latency to the system.
High Camera Counts: For setups with more than 24 Prime series cameras, we recommend using a 10 Gigabit uplink switch and connecting it to the host PC via an Ethernet cable that supports 10 Gigabit transfer rates (Cat6a or above). This will provide larger data bandwidth and reduce data transfer latency.
PoE switch requirement: The PoE switches must be able to provide 15.4W power to every port simultaneously. PrimeX 41, PrimeX 22, and Prime Color camera models run on a high power mode to achieve longer tracking ranges, and they require 30W of power from each port. If you wish to operate these cameras at standard PoE mode, set the LLDP (PoE+) Detection setting to false under the application settings. For network switches provided by OptiTrack, refer to the label for the number of cameras supported for each switch.
USB Cables: Keep USB cable length restrictions in mind, each USB 2.0 cable must not exceed 5 meters in length.
Connect the OptiHub(s) to a Host PC: Use USB 2.0 cables (type A/B) to connect each OptiHub to the host PC. To optimize available bandwidth, evenly split the OptiHub connections between different USB adapters on the host PC. For large system setups, up to two 5-meter active USB extensions can be used for connecting an OptiHub, for a total length of 15 meters.
Power the OptiHub(s): Use the provided power adapters to connect each OptiHub to external power. All USB cameras will be powered by the OptiHub(s).
Connect the Cameras to the OptiHub(s): Use USB 2.0 cables (type B/mini-B) to connect each USB camera to an OptiHub. When using multiple OptiHubs, evenly distribute the camera connections among the OptiHubs in order to balance the processing load. Note that USB extensions are not supported when connecting a camera to an OptiHub.
Multiple OptiHubs: Up to four OptiHubs (24 USB cameras) can be used in one system. When setting up multiple OptiHubs, all OptiHubs must be connected, or cascaded, in a series chain with RCA synchronization cables. More specifically, the Hub Sync Out port of one OptiHub needs to be connected to the Hub Sync In port of another, as shown in the diagram.
External Sync: When integrating external devices, use the External Sync In/Out ports that are available on each OptiHub.
Duo/Trio Tracking Bars use the I/O-X USB hub for powering the device (3.0 A), connecting to the computer (USB A-B), and synchronizing with external devices.
See Also: Network setup page.
Optical motion capture systems utilize multiple 2D images from each camera to compute, or reconstruct, corresponding 3D coordinates. For best tracking results, cameras must be placed so that each of them captures a unique vantage of the target capture area. Place the cameras around the perimeter of the capture volume, as shown in the example below, so that markers in the volume are visible to at least two cameras at all times. Mount cameras securely onto stable structures (e.g. a truss system) so that they do not move throughout the capture. When using tripods or camera stands, ensure that they are placed in stable positions. After placing the cameras, aim them so that their views overlap around the region where most of the capture will take place. Any significant camera movement after system calibration may require re-calibration. Cable strain relief should be used at the camera end of the camera cables to prevent potential damage to the cameras.
See Also: Camera Placement and Camera Mount Structures pages.
In order to obtain accurate and stable tracking data, it is very important that all of the cameras are correctly focused on the target volume. This is especially important for close-up and long-range captures. For common tracking applications, focus-to-infinity should generally work fine; however, it is still important to confirm that each camera in the system is focused.
To adjust or check camera focus, place some markers on the target tracking area. Then, set the camera to raw grayscale mode, increase the exposure and LED settings, and zoom in on one of the retro-reflective markers in the capture volume to check the clarity of the image. If the image is blurry, adjust the camera focus and find the point where the marker is best resolved.
See Also: Aiming and Focusing page.
In order to properly run a motion capture system using Motive, the host PC must satisfy the minimum system requirements. Required minimum specifications vary depending on the size of the mocap system and the types of cameras used. Consult our Sales Engineers, or use the Build Your Own feature on our website, to find out the host PC specification requirements.
Motive is a software platform designed to control motion capture systems for various tracking applications. Motive not only allows the user to calibrate and configure the system, but it also provides interfaces for both capturing and processing of 3D data. The captured data can be recorded or live-streamed into other pipelines.
If you are new to Motive, we recommend reading through the Motive Basics page after going through this guide to learn about basic navigation controls in Motive.
Motive Activation Requirements
The following items are required for activating Motive. Please note that the license must remain valid past the release date of the version that you are activating. If the license has expired, please update the license or use an older version of Motive that was released prior to the license expiration date.
Motive 2.x license
USB Hardware Key
Host PC Requirements
Required PC specifications may vary depending on the size of the camera system. Generally, the recommended specifications are required for systems with more than 24 cameras.
Recommended
OS: Windows 10, 11 (64-bit)
CPU: Intel i7 or better
RAM: 16GB of memory
GPU: GTX 1050 or better with the latest drivers
Minimum
OS: Windows 10, 11 (64-bit)
CPU: Intel i7
RAM: 4GB of memory
Download and Install
To install Motive, simply download the Motive software installer for your operating system from the Motive Download Page, then run the installer and follow its prompts.
Note: Anti-virus software can interfere with Motive's ability to communicate with cameras or other devices, and it may need to be disabled, or configured to allow the device communication, in order to properly run the system.
The first time Motive 2.3.x is installed on a computer, the following software also needs to be installed:
Microsoft Visual C++ Redistributables 2013 and 2015
Microsoft DirectX 9c
OptiTrack USB Drivers
It is important to install the specific versions required by Motive 2.3.x, even if newer versions are installed.
License Activation Steps
Insert the USB Hardware Key into a USB-A port on the computer. If needed, you can also use a USB-A adapter to connect.
Launch Motive
Activate your software using the License Tool, which can be accessed in the Motive splash screen. You will need to input the License Serial Number and the Hash Code for your license.
After activation, the License Tool will place the license file associated with the USB Hardware Key in the License folder. For other license activation questions, visit the Licensing FAQs or contact our Support.
Notes on using USB Hardware Key
When connecting the USB Hardware Key to the computer, avoid sharing the same USB card with other USB devices that frequently transmit large amounts of data. For example, if you have external devices (e.g. force plates, NI-DAQ) that communicate via USB, connect those devices to a separate USB card so that they do not interfere with the Hardware Key.
When you first launch Motive, the Quick Start panel will show up, and you can use this panel to quickly get started on specific tasks. By default, Motive will start on the Calibration Layout. Using this layout, you can calibrate the camera system and construct a 3D tracking volume. Note that the initial layout may be slightly different for different camera models or software licenses.
The following table briefly explains purposes of some of the panels on the initial layout:
Quick Start Panel
The quick start panel provides quick access to typical initial actions when using Motive. Each option will quickly lead you to the layouts and actions for corresponding selection. If you wish not to see this panel again, you can uncheck the box at the bottom. This panel can be re-accessed under the Help tab.
Devices pane: Lists the connected cameras and other devices and lets you view and configure their settings.
Properties pane: Displays and edits the properties of the selected camera, device, or asset.
Perspective View pane: 3D viewport for viewing the capture volume, reconstructed markers, and tracked assets.
Camera Preview pane: 2D viewport showing the view from each camera.
Calibration pane: Contains the tools for calibrating the camera system.
Control Deck: Recording controls in Live mode and playback controls in Edit mode.
See Also: List of UI pages from the Motive section of the wiki.
Use the following controls for navigating throughout the 2D and 3D viewports in Motive. Most of the navigation controls are customizable, including both mouse and Hotkey controls. The Hotkey Editor Pane and the Mouse Control Pane under the Edit tab allow you to customize mouse navigation and keyboard shortcuts to common operations.
Rotate view: Right mouse button + drag
Pan view: Middle (wheel) button + drag
Zoom in/out: Mouse wheel
Select in view: Left mouse click
Toggle selection in view: CTRL + left mouse click
Now that the cameras are connected and showing up in Motive, the next step is to configure the camera settings. Appropriate camera settings will vary depending on various factors including the capture environment and tracked objects. The overall goal is to configure the settings so that the marker reflections are clearly captured and distinguished in the 2D view of each camera. For a detailed explanation on individual settings, please refer to the Devices pane page.
To check whether the camera settings are optimized, it is best to check both the grayscale-mode images and the tracking-mode (Object or Precision) images and make sure the marker reflections stand out from the image. You can switch a camera into grayscale mode either in Motive or by using the Aim Assist button on supported cameras. In Motive, you can right-click on the Cameras Viewport and switch the video mode from the context menu, or you can change the video mode through the Properties pane.
Exposure Setting
The exposure setting determines how long the camera imagers are exposed for each frame of data. With a longer exposure, more light is captured by the camera, creating brighter images that can improve visibility for small and dim markers. However, high exposure values can introduce false markers, larger marker blooms, and marker blurring, all of which can negatively impact marker data quality. It is best to keep the exposure setting as low as possible while the markers remain clearly visible in the captured images.
Tip: For the calibration process, click the Layout → Calibrate menu (CTRL + 1) to access the calibration layout.
In order to start tracking, all cameras must first be calibrated. Through the camera calibration process, Motive computes the position and orientation of the cameras (extrinsics) as well as the amount of lens distortion in the captured images (intrinsics). Using the calibration results, Motive constructs a 3D capture volume, and motion tracking is accomplished within this volume. All of the calibration tools can be found in the Calibration pane. Read through the Calibration page to learn about the calibration process and what other tools are available for more efficient workflows.
See Also: Calibration page.
Duo/Trio Tracking Bars: Camera calibration is not needed for Duo/Trio Tracking Bars. The cameras are pre-calibrated using their fixed camera placements, which allows the tracking bars to work right out of the box without the calibration process. To adjust the ground plane, use the Coordinate System Tools in Motive.
Masking
Remove any unwanted objects from the capture volume and physically cover any extraneous IR light reflections or interference. Any remaining reflections detected in the camera views can then be masked from the Calibration pane before wanding.
[Motive:Calibration pane] In Motive, open the Calibration pane or use the calibration layout (CTRL + 1).
Wanding
Bring out the calibration wand.
[Motive:Calibration pane] From the Calibration pane, make sure the Calibration Type is set to Full and the correct type of the wand is specified under the OptiWand section.
[Motive:Calibration pane] Click Start Wanding to begin wanding.
Bring the wand into the capture volume, and wave the wand throughout the volume and allow cameras to collect wanding samples.
[Motive:Calibration pane] When the system indicates that enough samples have been collected, click the Calculate button to begin the calculation. This may take a few minutes.
[Motive:Calibration pane] When the Ready to Apply button becomes enabled, click Apply Result.
[Motive] Calibration results window will be displayed. After examining the wanding result, click Apply to apply the calibration.
Wanding tips
For best results, collect wand samples evenly and comprehensively throughout the volume, covering both low and high elevations. If you wish to start calibrating inside the volume, cover one of the markers and expose it wherever you wish to start wanding. When at least two cameras detect all three markers while no other reflections are present in the volume, the wand will be recognized, and Motive will start collecting samples.
The sufficient sample count for calibration may vary for different sized volumes, but in general, collect 2500 to 6000 samples for each camera. Once a sufficient number of samples has been collected, press the Calculate button under the Calibration section.
During the wanding process, each camera should see only the three markers on the calibration wand. If any of the cameras are detecting extraneous reflections, go back to the masking step to mask them.
Setting the Ground Plane
Now that all of the cameras have been calibrated, the next step is to define the ground plane of the capture volume.
Place a calibration square inside the capture volume. Position the square so that the vertex marker is placed directly over the desired global origin.
Orient the calibration square so that the longer arm points in the desired +Z direction and the shorter arm points in the desired +X direction of the volume. Motive uses a y-up right-handed coordinate system.
Level the calibration square parallel to the ground plane.
(Optional) In the 3D view in Motive, select the calibration square markers. If retro-reflective markers on the calibration square are the only reconstructions within the capture volume, Motive will automatically detect the markers.
Access the Ground Plane tab in the Calibration pane.
While the calibration square markers are selected, click Set Ground Plane from the Ground Plane Calibration Square section.
Motive will prompt you to save the calibration file. Save the file to the corresponding session folder.
Once the camera system has been calibrated, Motive is ready to collect data. But before doing so, let's prepare the session folders for organizing the capture recordings and define the trackable assets, including Rigid Bodies and/or Skeletons.
Motive Recordings
Capture recordings, called Takes (TAK files), are organized into session folders, which can be created and managed from the Data pane in Motive.
See Also: Motive Basics page.
Motive Profiles
Motive's software configurations are saved to Motive Profiles (*.motive extension). All of the application-related settings can be saved into the Motive profiles, and you can export and import these files and easily maintain the same software configurations.
Place the retro-reflective markers onto subjects (Rigid Body or Skeleton) that you wish to track. Double-check that the markers are attached securely. For skeleton tracking, open the Builder pane, go to skeleton creation options, and choose a marker set you wish to use. Follow the skeleton avatar diagram for placing the markers. If you are using a mocap suit, make sure that the suit fits as tightly as possible. Motive derives the position of each body segment from related markers that you place on the suit. Accordingly, it is important to prevent the shifting of markers as much as possible. Sample marker placements are shown below.
See Also: Markers page for marker types, or Rigid Body Tracking and Skeleton Tracking page for placement directions.
Tip: For creating trackable assets, click the Layout → Create menu item to access the model creation layout.
Create Rigid Body
To define a Rigid Body, simply select three or more markers in the Perspective View, right-click, and select Rigid Body → Create Rigid Body From Selected. You can also use the CTRL+T hotkey or the Builder pane to create Rigid Body assets.
Create Skeleton
To define a skeleton, have the actor enter the volume with markers attached at the appropriate locations. Open the Builder pane and select Skeleton and Create. Under the marker set section, select the marker set you wish to use, and a corresponding model with the desired marker locations will be displayed. After verifying that the marker locations on the actor correspond to those in the Builder pane, instruct the actor to strike the calibration pose. The most common calibration pose is the T-pose, which requires a proper standing posture with the back straight and the head looking directly forward, with both arms stretched out to the sides, forming a "T" shape. While the actor is in the T-pose, select all of the markers of the desired skeleton in the 3D view and click the Create button in the Builder pane. In some cases, you may not need to select the markers if only the desired actor is in view.
See Also: Rigid Body Tracking page and Skeleton Tracking page.
Tip: For recording a capture, use the Layout → Capture menu item to access the capture layout.
Once the volume is calibrated and skeletons are defined, you are ready to capture. In the Control Deck at the bottom, press the dimmed red record button, or simply press the spacebar while in Live mode, to begin capturing. The button will illuminate in bright red to indicate that recording is in progress. You can stop recording by clicking the record button again, and a corresponding capture file (TAK extension), also known as a capture Take, will be saved within the current session folder. Once a Take has been saved, you can play back captures, reconstruct, edit, and export your data in a variety of formats for additional analysis or use with most 3D software.
When tracking skeletons, it is beneficial to start and end the capture with a T-pose. This allows you to recreate the skeleton in post-processing when needed.
See Also: Data Recording page.
After capturing a Take, the recorded 3D data and its trajectories can be post-processed using the Data Editing tools, which can be found in the Edit Tools pane. Data editing tools provide post-processing features such as deleting unreliable trajectories, smoothing selected trajectories, and interpolating missing (occluded) marker positions. Post-editing the 3D data can improve the quality of the tracking data.
Tip: For data editing, use the Layout → Edit menu item to access the edit layout.
General Editing Steps
Skim through the overall frames in a Take to get an idea of which frames and markers need to be cleaned up.
Refer to the Labels pane and inspect the gap percentage for each marker.
Select a marker that is often occluded or misplaced.
Look through the frames in the Graph pane, and inspect the gaps in the trajectory.
For each gap in frames, look for an unlabeled marker at the expected location near the solved marker position. Re-assign the proper marker label if the unlabeled marker exists.
Use the Trim Tails feature to trim both ends of the trajectory at each gap. It trims off a few frames adjacent to the gap where tracking errors might exist, preparing occluded trajectories for gap filling.
Find the gaps to be filled, and use the Fill Gaps feature to model the estimated trajectories for occluded markers.
Re-Solve assets to update the solve from the edited marker data
Markers detected in the camera views get trajectorized into 3D coordinates. The reconstructed markers need to be labeled so that Motive can distinguish different trajectories within a capture. Trajectories of labeled reconstructions can be exported individually or solved together to track the movements of the target subjects. Markers associated with Rigid Bodies and Skeletons are labeled automatically through the auto-labeling process; note that Rigid Body and Skeleton markers can be auto-labeled both in Live mode (before capture) and in Edit mode (after capture). Individual markers can also be labeled, but each such marker needs to be labeled manually in post-processing using assets and the Labeling pane. These manual labeling tools can also be used to correct any labeling errors. Read through the Labeling page for more details on assigning and editing marker labels.
Auto-label: Automatically label sets of Rigid Body markers and skeleton markers using the corresponding asset definitions.
Manual Label: Label individual markers manually using the Labeling pane, assigning labels defined in the Marker Set, Rigid Body, or Skeleton assets.
See Also: Labeling page.
Changing Marker Labels and Colors
When needed, you can use the Marker Sets pane to adjust marker labels for both Rigid Body and Skeleton markers. You can also adjust marker sticks and marker colors as needed.
Motive exports reconstructed 3D tracking data in various file formats, and exported files can be imported into other pipelines to further utilize the capture data. Supported formats include CSV and C3D for Motive: Tracker, and additionally FBX, BVH, and TRC for Motive: Body. To export tracking data, select a Take to export and open the export dialog window, which can be accessed from File → Export Tracking Data or by right-clicking a Take → Export Tracking Data in the Data pane. Multiple Takes can be selected and exported from Motive or by using the Motive Batch Processor. From the export dialog window, the frame rate, measurement scale, and frame range of the exported data can be configured. Frame ranges can also be specified by selecting a frame range in the Graph View pane before exporting a file. In the export dialog window, corresponding export options are available for each file format.
See Also: Data Export page.
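As a quick illustration of working with an exported CSV outside of Motive, the sketch below loads the per-frame columns with pandas. The file name is hypothetical, and the number of metadata and header rows at the top of the export varies with the Motive version and export options, so the skiprows value here is an assumption to adjust for your own files.

```python
import pandas as pd

# Hypothetical file name; Motive CSV exports start with several metadata/header
# rows before the per-frame data, so adjust skiprows for your version and options.
take = pd.read_csv("capture_take_001.csv", skiprows=6, low_memory=False)

print(take.shape)   # (number of frames, number of columns)
print(take.head())  # first rows of the per-frame data
```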
Motive offers multiple options for streaming tracking data to external applications in real time. Tracking data can be streamed in both Live mode and Edit mode. Streaming plugins are available for Autodesk MotionBuilder, Visual3D, The MotionMonitor, Unreal Engine 5, 3ds Max, Maya (VCS), and VRPN, and they can be downloaded from the OptiTrack website. For other streaming options, the NatNet SDK enables users to build custom client and server applications to stream capture data. Common motion capture applications rely on real-time tracking, and the OptiTrack system is designed to deliver data at extremely low latency even when streaming to third-party pipelines. Detailed instructions on specific streaming protocols are included in the PDF documentation that ships with the respective plugins and SDKs.
See Also: Data Streaming page
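As a hedged illustration of receiving streamed data in a custom application, the sketch below follows the pattern of the Python sample (NatNetClient.py) that ships with the NatNet SDK. The class, method, and attribute names are taken from that sample and differ between SDK versions (older samples use camelCase names), so treat them as assumptions and verify against the sample included with your SDK download.

```python
# Sketch only: names follow the NatNetClient.py Python sample shipped with the
# NatNet SDK and may differ between SDK versions; verify against your SDK copy.
from NatNetClient import NatNetClient

def receive_rigid_body_frame(body_id, position, rotation):
    # Called for each streamed rigid body on every frame.
    print(f"rigid body {body_id}: pos={position} rot={rotation}")

client = NatNetClient()
client.set_client_address("127.0.0.1")   # machine running this script (assumed local)
client.set_server_address("127.0.0.1")   # machine running Motive (assumed same machine)
client.set_use_multicast(True)           # must match Motive's streaming settings
client.rigid_body_listener = receive_rigid_body_frame
client.run()                             # starts the background listening threads
```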
This page provides instructions on how to set up, configure, and use the Prime Color video camera.
Prime Color
The Prime Color is a full-color video camera that is capable of recording synchronized, high-speed video. It can also be added to a mocap system and used as a reference camera. The camera enables recording of high-frame-rate video (up to 500 FPS at 540p) with resolutions up to 1080p (at 250 FPS) by performing onboard compression (H.264) of the captured frames. It connects to the camera network and receives power through a standard PoE connection.
eStrobe
When capturing high-speed video, the exposure time of each camera frame is very short, so providing sufficient lighting becomes critical for obtaining clear images. The eStrobe is designed to optimally brighten the image taken by the Prime Color camera by precisely synchronizing the illumination of the eStrobe LEDs with each camera exposure. This allows the LEDs to illuminate at exactly the right time, producing the most efficient and powerful lighting for high-speed video capture. Also, the eStrobe emits white light only, so it will not interfere with tracking in the IR spectrum.
The eStrobe is intended for indoor use only. When capturing outdoors, sunlight will typically provide sufficient lighting for high-speed capture.
Required PC specifications may vary depending on the size of the camera system. Generally, the recommended specifications are required for systems with more than 24 cameras.
Recommended
OS: Windows 10, 11 (64-bit)
CPU: Intel i7 or better
RAM: 16GB of memory
GPU: GTX 1050 or better with the latest drivers
Minimum
OS: Windows 10, 11 (64-bit)
CPU: Intel i5
RAM: 4GB of memory
Prime Color cameras require the computer to be equipped with a dedicated graphics card with the performance of a GTX 1050 or better, running the latest driver that supports OpenGL version 4.0 or higher.
Since each color camera can upload a large amount of data over the network, the size of a recorded Take (TAK) can get quite large even with a short recording. For example, if a 10-second take is recorded with a total data throughput of 1 GBps, the resulting TAK file will be 10 GB, which can quickly fill up the storage device. Please make sure there is enough capacity available on the disk drive. If you export the recorded data to video files after capture, re-encoding the videos can reduce the file sizes by orders of magnitude. See: Re-encoding
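For rough planning, the sketch below extends the same arithmetic (throughput multiplied by recording time) to more cameras and longer takes; the per-camera throughput and recording length here are assumptions for the example.

```python
# Back-of-the-envelope Take size estimate for color video recordings.
throughput_MBps_per_camera = 100   # assumed: one Prime Color near its maximum bit-rate
num_color_cameras = 2
duration_s = 60                    # assumed one-minute recording

take_size_GB = throughput_MBps_per_camera * num_color_cameras * duration_s / 1000
print(f"Estimated Take size: ~{take_size_GB:.0f} GB")  # ~12 GB before any re-encoding
```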
Since Prime Color cameras can output a large amount of data to RAM very quickly, it is also important that the write-out speed to storage is fast enough. If the write-out speed to the secondary drive is not fast enough, the occupied RAM may gradually increase to its maximum. For recording with just one or two Prime Color cameras, a standard SSD will do the job. However, when using multiple Prime Color cameras, it is recommended to use a fast storage drive (e.g. an M.2 SSD) that can quickly write out the recorded capture from RAM.
When running two or more Prime Color cameras, the computer must have a 10-gigabit network adapter in order to successfully receive all of the data outputted from the camera system. Please see Load Balancing section for more information.
Different types of lenses can be equipped on a Prime Color camera as long as the lens mount is compatible; however, for Prime Color cameras we suggest using C-mount lenses to fully utilize the imager. Prime Color cameras with a C-mount can be equipped with either the 12mm F#1.8 lens or the 6.8mm F#1.6 lens. The 12mm lens is zoomed in more and is more suitable for capturing at long ranges. The 6.8mm lens, on the other hand, has a larger field of view and is more suitable for capturing a wide area. Both lenses have adjustable f-stop and focus settings, which can be optimized for different capture environments and applications.
F-Stop: Set the f-stop to a low value to make the aperture size bigger. This will allow in more light onto the imager, improving the image quality. However, this may also decrease the camera's depth of field, requiring the lens to be focused specifically on the target capture area.
Focus: For best image quality, make sure the lenses are focused on the target tracking area.
6.5mm F#1.6 lens: When capturing 1080p images with the 6.5mm F#1.6 lens, you may see vignetting in the corners of the captured frames due to imager size limitations. For a larger FOV, please use the 6.8mm F#1.6 lens to avoid this vignetting issue.
Before going into the details of setting up a system with Prime Color cameras, it is important to go over the data bandwidth available within the camera network. At its maximum bit-rate setting for capturing the best quality image, one Prime Color camera can transmit data at a rate of up to ~100 Megabytes per second (MBps), or ~800 Megabits per second (Mbps). For comparison, a tracking camera in Object Mode outputs data at a rate of less than 1 MBps, which is several orders of magnitude smaller than the output from a Prime Color camera. A standard network switch (1 Gb switch) and network card only support network traffic of up to 1000 Mbps (1 Gbps). When Prime Color camera(s) are used, they can take up a large portion, or all, of the available bandwidth, so extra attention to bandwidth use is needed when first setting up the system.
When there is not enough available bandwidth, captured 2D frames may drop due to the data bottleneck. Thus, it is important to take the bandwidth consumption into account and make sure an appropriate set of network switches (PoE and uplink), Ethernet cables, and a network card is used. If a 1 Gb network/uplink switch is used, then only one Prime Color camera can be used at its maximum bit-rate setting. If two or more Prime Color cameras need to be used, then either a 10 Gb network setup is required, or the bit-rate setting needs to be turned down. A lower bit-rate further compresses the image with a tradeoff in image quality, which may or may not be acceptable depending on the capture application.
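A quick way to sanity-check a planned setup is to add up the expected per-camera rates against the uplink capacity. The sketch below uses the ~800 Mbps figure quoted above for a Prime Color at maximum bit-rate; the tracking-camera rate and camera counts are assumptions for the example.

```python
# Rough bandwidth budget for a mixed color/tracking camera network.
link_capacity_mbps = 1000        # 1 Gb uplink; use 10000 for a 10 Gb uplink
color_camera_mbps = 800          # one Prime Color near its maximum bit-rate (from above)
tracking_camera_mbps = 8         # assumed: tracking camera in Object mode (< 1 MBps)
num_color, num_tracking = 2, 12  # assumed camera counts

total_mbps = num_color * color_camera_mbps + num_tracking * tracking_camera_mbps
print(f"Estimated load: {total_mbps} Mbps of {link_capacity_mbps} Mbps available")
if total_mbps > link_capacity_mbps:
    print("Over budget: lower the color bit-rate or move to a 10 Gb uplink.")
```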
Detecting Dropped 2D Frames
Every 2D frame drop is logged in the Log pane, and frame drops can also be identified in the Devices pane, where they are indicated by a warning sign next to the corresponding camera. You may see a few frame drops when booting up the system or when switching between Live and Edit modes; however, this should only occur momentarily. If the system continues to drop 2D frames, it indicates there is a problem with receiving the camera data. If this is happening with Prime Color cameras, try lowering the bit-rate; if the system then stops dropping frames, there was not enough available bandwidth. To use the cameras at a higher bit-rate setting, you will need to properly balance the load within the available network bandwidth.
Note: Due to the current architecture of our bug reporting in Motive, a single color camera will not display dropped frame messages. If you need these messages you will need to either connect another camera or an eSync 2 into the system.
Each Prime Color camera must be uplinked and powered through a standard PoE connection that can provide at least 15.4 watts to each port simultaneously.
Prime Color cameras connect to the camera system just like other Prime series camera models. Simply plug the camera into a PoE switch that has enough available bandwidth, and it will be powered and synchronized along with the other tracking cameras. When you have two or more color cameras, they should be distributed evenly across different PoE switches so that the data load is balanced.
When using multiple Prime Color cameras, we recommend connecting the color cameras directly to the 10-gigabit aggregation (uplink) switch, because this setup best prevents bandwidth bottlenecks. A PoE injector will be required if the uplink switch does not provide PoE. This allows the data to travel directly through the uplink switch to the host computer over the 10-gigabit network interface, and it also separates the color cameras from the tracking cameras.
The eStrobe synchronizes with Prime Color cameras through an RCA cable connection. It receives exposure signals from the cameras and synchronizes its illumination accordingly. Depending on the frame rate of the camera system, the eStrobe will vary its illumination frequency, and it will also vary its percent duty cycle depending on the exposure length. Multiple eStrobes can be daisy-chained in series by relaying the sync signal from the output port of one to the input port of another, as shown in the diagram.
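To give a feel for the relationship described above, the sketch below computes the fraction of each frame period that the strobe would be lit if its illumination tracks the camera exposure exactly; the frame rate and exposure values are assumptions for the example.

```python
# Illustrative strobe duty cycle when illumination follows the camera exposure.
frame_rate_fps = 500     # assumed system frame rate
exposure_us = 250        # assumed exposure length in microseconds

frame_period_us = 1_000_000 / frame_rate_fps   # 2000 us per frame at 500 FPS
duty_cycle = exposure_us / frame_period_us
print(f"Strobe on for {exposure_us} us of every {frame_period_us:.0f} us frame "
      f"({duty_cycle:.1%} duty cycle)")         # -> 12.5% duty cycle
```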
Illumination:
The eStrobe emits only white light and does not interfere with tracking within the IR spectrum. In other words, its powerful illumination will not introduce noise to the IR tracking data.
Power Requirement:
The amount of power drawn by each eStrobe varies depending on the system frame rate as well as the length of the camera exposures, because the eStrobe is designed to vary its illumination rate and percent duty cycle based on those settings. At maximum, one eStrobe can draw up to 240 Watts of power. A typical 110 V wall outlet supplies 110 V at 15 A, which totals 1650 W of power. There may also be other limiting factors, such as restrictions from the surge protectors or extension cords that are used. Therefore, in general, we recommend connecting no more than five eStrobes to a single power source.
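The five-eStrobe guideline follows directly from that arithmetic; the sketch below works it through, with the safety margin chosen as an assumption for the example.

```python
# Illustrative power budget for eStrobes on one 110 V / 15 A circuit.
estrobe_max_w = 240      # worst-case draw of one eStrobe (from above)
circuit_w = 110 * 15     # 1650 W available on the circuit
headroom = 0.75          # assumed margin for surge protectors, cords, and other loads

max_units = int(circuit_w * headroom // estrobe_max_w)
print(f"Circuit capacity {circuit_w} W -> at most {max_units} eStrobes")  # -> 5
```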
Warning:
Be aware of the hot surface: the eStrobe gets very hot as it runs.
Avoid looking directly at the eStrobe; it could damage your eyes.
Make sure the power strips or extension cords can handle the power draw. Using light-duty components that cannot sufficiently handle the amount of power drawn by the eStrobes could damage the cords or even the device.
The eStrobe is not typically needed for outdoor use. Sunlight should provide enough lighting for the capture.
When capturing without eStrobes, the camera relies entirely on ambient lighting to capture the image, and the brightness of the captured frames may vary depending on which type of light source is used. In general, when capturing without an eStrobe, we recommend setting the camera to a lower frame rate (30 to 120 FPS) and increasing the camera exposure so that the imager can take in more light.
Indoor
When capturing indoors without the eStrobe, you will be relying on the room lighting to brighten the volume. Here, it is important to note that every type of artificial light source illuminates, or flickers, at a certain frequency (e.g. fluorescent light bulbs typically flicker at 120 Hz). This is usually fast enough that the flickering is not noticeable to the human eye; however, with high-speed cameras, the flickering may become apparent.
When a Prime Color camera captures at a frame rate higher than the ambient illumination frequency, you will start noticing brightness changes between consecutive frames. This happens because, with mismatched frequencies, the cameras expose at different points of the illumination phase. For example, if you capture at 240 FPS with 120 Hz light bulbs lighting the volume, the brightness of the captured images may differ between even- and odd-numbered frames throughout the capture, as illustrated in the sketch after the note below. Please take this into consideration and provide appropriate lighting as needed.
Info: Frequencies of typical light bulbs
Fluorescent: Fluorescent light bulbs typically illuminate at 120 Hz with 60 Hz AC input.
Incandescent: Incandescent light bulbs typically illuminate at 120 Hz with 60 Hz AC input.
LED light bulbs: Variable depending on the manufacturer.
eStrobe: LEDs on the eStrobe will be synchronized to the exposure signal from the cameras and illuminate at the same frequency.
Outdoor
When capturing outdoors using Prime Color cameras, sunlight will typically provide enough ambient lighting. Unlike light bulbs, sunlight is emitted continuously, so there is no need to worry about the illumination frequency. Furthermore, the sun is bright enough that you should be able to capture high-quality images by adjusting only the f-stop (aperture size) and the exposure values.
Once the Prime Color camera system is set up and Motive is launched, all of the connected cameras should be listed under the Devices pane. At this point, check the following items to make sure your system is operating properly.
2D Frame Delivery: There should be no dropped 2D frames. You can monitor this under the Log pane or from the Devices pane. If frame drops are reported continuously, lower the bit-rate setting or revisit the network configuration to make sure the data loads are balanced. For more information, see the Data Bandwidth section of this page.
CPU Usage: Open the Windows Task Manager and check the CPU processing load. If one of the CPU cores is fully occupied, the CPU is not fast enough to process the data from the color camera. In this case, you will want to use a faster CPU or lower the bit-rate setting.
RAM Usage: Open the Windows Task Manager and check the memory usage. If the RAM usage slowly creeps up to the maximum while recording a Take, the disk drive is not fast enough to write out the color video from RAM. You will have to reduce the bit-rate setting or use a faster disk drive (e.g. M.2 SSD).
Hard Drive Space: Make sure there is enough storage capacity available on the computer. Take files (TAK) with color camera data can be quite large and could quickly fill up the drive, especially when recording lightly-compressed video from multiple color cameras.
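To gauge how quickly drive space can disappear, you can roughly estimate a Take's color-video footprint from the per-camera output figures described later on this page (a sketch with assumed values, not an exact file-size formula):

```python
# Sketch: rough storage estimate for color camera recordings.
MAX_CAMERA_MBPS = 100            # approximate max output per color camera (MB/s)

def take_size_gb(num_cameras: int, bitrate_percent: float, seconds: float) -> float:
    per_camera_mbps = MAX_CAMERA_MBPS * bitrate_percent / 100.0
    return num_cameras * per_camera_mbps * seconds / 1024.0

# Two color cameras at a bit-rate setting of 50 for a 5-minute take:
print(round(take_size_gb(2, 50, 300), 1), "GB")  # ~29.3 GB of color video alone
```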
When you launch Motive, connected Prime Color cameras will be shown in Motive, and you will be able to configure their settings as you would for other tracking cameras. Open the Devices pane and the Properties pane, and select the Prime Color camera(s). The Properties pane will list the key properties specific to the selected color cameras. Optimizing these settings is important in order to obtain the best quality images without flooding the network bandwidth. The key settings for the color cameras are image resolution, gamma correction, and the compression mode and bit-rate settings, which are covered in the following sections.
Default: 1920 x 1080
This property sets the resolution of the images captured by the selected cameras. Since the amount of data increases with higher resolution, the maximum allowable frame rate varies depending on which resolution is selected. Below are the maximum allowed frame rates for each resolution setting.
960 x 540 (540p): up to 500 FPS
1280 x 720 (720p): up to 360 FPS
1920 x 1080 (1080p): up to 250 FPS
Default: Constant Bit Rate.
This property determines how much the captured images will be compressed. The Constant Bit-Rate mode is used by default and recommended because it is easier to control the data transfer rate and efficiently utilize the available network bandwidth.
Constant Bit-Rate
In the Constant Bit-Rate mode, Prime Color cameras vary the degree of image compression to match the data transmission rate given under the Bit Rate settings. At a higher bit-rate setting, the captured image will be compressed less. At a lower bit-rate setting, the captured image will be compressed more to meet the given data transfer rate, but compression artifacts may be introduced if it is set too low.
Variable Bit-Rate
The Variable Bit-Rate setting is also available; it keeps the amount of compression constant and allows the data transfer rate to vary. This mode can be beneficial when capturing images of objects with detailed textures because it keeps the amount of compression the same on all frames. However, it may introduce dropped frames whenever the camera compresses highly detailed images, because the data transfer rate increases and may overflow the network bandwidth. For this reason, we recommend using the Constant Bit-Rate setting in most applications.
Default: 50
Available only while using Constant Bit-rate Mode
The bit-rate setting determines the transmission rate output by the selected color camera. The value is given as a percentage of the maximum data transmission speed, and each color camera can output up to ~100 MBps; in other words, the configured value indirectly represents the transmission rate in megabytes per second (MBps). At a bit-rate setting of 100, the camera captures the best quality image, but it could overload the network if there is not enough bandwidth to handle the transmitted data.
Since the bit-rate controls the amount of data output from each color camera, this is one of the most important settings when configuring the system. If your system is experiencing 2D frame drops, one of the system requirements is not being met: network bandwidth, CPU processing, or RAM/disk throughput. In such cases, decrease the bit-rate setting to reduce the amount of data output from the color cameras.
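A simple way to sanity-check the bit-rate setting against your uplink is to add up the worst-case output of all color cameras and compare it with the link capacity (a sketch using the ~100 MBps per-camera figure above; actual traffic also includes the tracking cameras):

```python
# Sketch: does the combined color-camera output fit within the uplink?
MAX_CAMERA_MBPS = 100                      # ~max output of one color camera (MB/s)
TEN_GIG_LINK_MBPS = 10_000 / 8             # 10 Gbps uplink is roughly 1250 MB/s

def total_color_traffic(num_cameras: int, bitrate_percent: float) -> float:
    return num_cameras * MAX_CAMERA_MBPS * bitrate_percent / 100.0

traffic = total_color_traffic(num_cameras=6, bitrate_percent=75)
print(traffic, "MB/s of", TEN_GIG_LINK_MBPS, "MB/s available")
# 450 MB/s fits on 10 GbE, but leave headroom for the tracking cameras too.
```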
Image Quality
The image quality increases at a higher bit-rate setting because a larger amount of data is recorded, but this results in larger file sizes and possible frame drops due to a data bandwidth bottleneck. The desired trade-off differs depending on the capture application and what it is used for. The graph below illustrates how the image quality varies depending on the camera frame rate and bit-rate settings.
Tip: Monitoring data output from each camera
Default: 24
Gamma correction is a non-linear amplification of the output image. The gamma setting adjusts the brightness of dark, midtone, and bright pixels differently, affecting both the brightness and contrast of the image. Depending on the capture environment, especially with a dark background, you may need to adjust the gamma setting to get the best quality images.
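Conceptually, gamma correction raises normalized pixel values to a power, so dark, midtone, and bright pixels are scaled by different amounts. The sketch below uses a generic power-law curve for illustration; it is not Motive's exact transfer function:

```python
# Generic gamma-correction sketch (not Motive's exact curve): normalized pixel
# values are raised to 1/gamma, which lifts dark pixels more than bright ones.
def apply_gamma(value: float, gamma: float) -> float:
    """value in [0, 1]; gamma > 1 brightens shadows, gamma < 1 darkens them."""
    return value ** (1.0 / gamma)

for v in (0.05, 0.5, 0.95):
    print(v, "->", round(apply_gamma(v, 2.2), 3))
# 0.05 -> 0.256, 0.5 -> 0.73, 0.95 -> 0.977: shadows change far more than highlights.
```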
Default: On
If you are using eStrobes to light up the capture volume, the LED setting must be enabled on the Prime Color cameras to which the eStrobes are connected. When this setting is enabled, the Prime Color camera outputs sync signals from its RCA sync output port, allowing the eStrobes to receive the signal and illuminate their LEDs.
In order to calibrate a color camera into the 3D capture volume, the Prime Color camera must be equipped with an IR filter switcher. Prime Color cameras without the IR filter switcher cannot be calibrated and can only be used as reference cameras to monitor the reference views in the 2D Camera View pane or in the Cameras viewport.
When loaded into Motive, Prime Color cameras without the IR filter switcher will be hidden in the 3D viewport. Only Prime Color cameras with the filter switcher will be shown in the 3D space.
The Prime Color FS is equipped with a filter switcher that allows the camera to detect in the IR spectrum. The Prime Color FS can be calibrated into the 3D capture volume using an active calibration wand with IR LEDs. Once calibrated, the color camera will be placed within the 3D viewport along with the other tracking cameras, and 3D assets (Marker Sets, Rigid Bodies, Skeletons, cameras) can be overlaid as shown in the image.
To calibrate the camera, switch the Prime Color FS to Object mode in the Camera Preview pane. This switches the color camera to detect in the IR spectrum. Then use the active wand to follow the standard calibration process. Once the calibration is finished, you can switch the camera back to Color Video mode.
Active Wand:
Currently, we only take custom orders for the active wands, but in the future, this will be available for sale. For additional questions about active wands, please contact us.
Once you have set up the system and configured the cameras correctly, Motive is now ready to capture Takes. Recorded TAK files will contain color video along with the tracking data, and you can play them back in Motive. Also, the color reference video can be exported out from the TAK.
Once the camera is set up, you can start recording from Motive. Captured frames will be stored within the TAK file and you can access them again in Edit mode. Please note that capture files with Prime Color video images will be much larger in file size.
Once the color videos have been saved into TAK files, the captured reference videos can be exported to AVI files using either the H.264 or MJPEG compression format. The H.264 format allows faster export of the recorded videos and is recommended. Video for the current TAK can be exported by clicking File → Export Video in Motive, or directly from the Data pane by right-clicking on the Take(s) and selecting Export Video from the context menu. The following export dialog window will open, where you can configure the export settings before outputting the files:
When this is set to Drop Frames, Motive will remove any dropped frames in the color video upon export. Please note that any dropped frames will be completely removed in this case, and thus, the exact frames in the exported file may not match the frames in the corresponding Motive recording. If needed, you can set this export option to Black Frame to insert black, or blank, frames in place of the dropped frames in the exported video.
If there are multiple TAK files containing reference video recordings, you can export the videos all at once in the Data pane or through the Motive Batch Processor. When exporting directly from the Data pane, simply CTRL-select multiple TAK files together, right-click to bring up the context menu, and click Export Video. When using the batch processor (NMotive), the VideoExporter class can be used to export videos from loaded TAK files.
The exported video file can be re-encoded and compressed further by additional subsampling. This can be done with third-party video processing software and can reduce the size of the exported file dramatically, by almost two orders of magnitude. Most high-end video editing software supports this, and Handbrake (https://handbrake.fr/) is a freely available open-source tool that is also capable of doing it. Since the exported video file can be large, we suggest using one of these third-party tools to re-encode it.
This page provides instructions on how to set up and use the OptiTrack active marker solution.
Additional Note
This solution is supported for Ethernet camera systems (Slim 13E or Prime series cameras) only. USB camera systems are not supported.
Motive version 2.0 or above is required.
This guide covers active component firmware versions 1.0 and above; this includes all active components that were shipped after September 2017.
The OptiTrack Active Tracking solution allows synchronized tracking of active LED markers using an OptiTrack camera system. It consists of the BaseStation and, depending on the user's needs, Active Tags that can be integrated into any object and/or the "Active Puck", which can act as its own single Rigid Body.
Connected to the camera system, the BaseStation emits RF signals to the active markers, allowing precise synchronization between camera exposure and illumination of the LEDs. Each active marker is uniquely labeled in Motive, allowing more stable Rigid Body tracking, since active markers will never be mislabeled and unique marker placements are no longer required to distinguish multiple Rigid Bodies.
Sends out radio frequency signals for synchronizing the active markers.
Powered by PoE, connected via Ethernet cable.
Must be connected to one of the switches in the camera network.
Connects to a USB power source and illuminates the active LEDs.
Receives RF signals from the Base Station and correspondingly synchronizes illumination of the connected active LED markers.
Emits 850 nm IR light.
4 active LEDs in each bundle; up to two bundles can be connected to each Tag (8 active LEDs per Tag: 4 LEDs per bundle × 2 bundles).
Size: 5 mm (T1 ¾) Plastic Package, half angle ±65°, typ. 12 mW/sr at 100mA
An Active Tag self-contained in a trackable object, providing 6 DoF information for any arbitrary object it is attached to. It carries a factory-installed Active Tag with 8 LEDs and a rechargeable battery with up to 10 hours of run time on a single charge.
Connects to one of the PoE switches within the camera network.
For best performance, place the base station near the center of your tracking space, with unobstructed lines of sight to the areas where your Active Tags will be located during use. Although the wireless signal is capable of traveling through many types of obstructions, there still exists the possibility of reduced range as a result of interference, particularly from metal and other dense materials.
Do not place external electromagnetic or radiofrequency devices near the Base Station.
When the BaseStation is working properly, the LED closest to the antenna should blink green while Motive is running.
BaseStation LEDs
Note: Behavior of the LEDs on the BaseStation is subject to change.
Communication Indicator LED: When the BaseStation is successfully sending out data and communicating with the active pucks, the LED closest to the antenna blinks green. If this LED lights red, the BaseStation has failed to establish a connection with Motive.
Interference Indicator LED: The middle LED indicates whether there is other signal traffic on the respective radio channel and PAN ID that might interfere with the active components. This LED should stay dark for the active marker system to work properly. If it flashes red, consider switching both the channel and the PAN ID on all of the active components.
Power Indicator LED: The LED located at the corner, furthest from the antenna, indicates power for the BaseStation.
Connect two sets of active markers (4 LEDs in each set) into a Tag.
Connect the battery and/or a micro USB cable to power the Tag. The Tag accepts 3.3V ~ 5.0V input from the micro USB cable. When powering from the battery, use only the batteries supplied by us. To recharge the battery, keep the battery connected to the Tag and then connect the micro USB cable.
To initialize the Tag, press the power switch once. Be careful not to hold the power switch down for more than a second, because doing so starts the device in firmware update (DFU) mode. If it initializes in DFU mode, which is indicated by two orange LEDs, simply power off and restart the Tag. To power off the Tag, hold the power switch down until the status LEDs go dark.
Once powered, you should be able to see the illumination of IR LEDs from the 2D reference camera view.
Puck Setup
Press the power button for 1~2 seconds and release. The top-left LED will illuminate orange while the Puck initializes. Once initialized, the bottom LED will light up green if it has made a successful connection with the BaseStation. The top-left LED will then start blinking green, indicating that sync packets are being received.
Active Pattern Depth
Settings → Live Pipeline → Solver Tab with Default value = 12
This adjusts the complexity of the illumination patterns produced by active markers. In most applications, the default value provides quality tracking results. If a high number of Rigid Bodies are tracked simultaneously, this value can be increased to allow more combinations of illumination patterns on each marker. If this value is set too low, duplicate active IDs can be produced; should this error appear, increase the value of this setting.
Minimum Active Count
Settings → Live Pipeline → Solver Tab with Default value = 3
Sets the number of rays required to establish the active ID on each frame of an active marker cycle. If this value is increased and active markers become occluded, it may take longer for the active markers to be re-established in the Motive view. The majority of applications will not need to alter this setting.
Active Marker Color
Settings → Views → 3D Tab with Default color = blue
The color assigned to this setting will be used to indicate and distinguish active and passive markers seen in the viewer pane of Motive.
For tracking of the active LED markers, the following camera settings may need to be adjusted for best tracking results:
For tracking active markers, set the camera exposure a bit higher than when tracking passive markers. This allows the cameras to better detect the active markers. The optimal value will vary depending on the camera system setup, but in general, set the camera exposure between 400 and 750 microseconds.
Rigid Body definitions created from actively labeled reconstructions will search for the specific marker IDs along with the marker placements to track the Rigid Body. This is explained further in the following section.
Duplicate active frame IDs
For active labeling to work properly, it is important that each marker has a unique active ID. When more than one marker shares the same ID, there may be problems reconstructing those active markers, and the following notification message will appear. If you see this notification, please contact support to change the active IDs on the active markers.
In recorded 3D data, the labels of unlabeled active markers will still indicate that they are active markers. As shown in the image below, an Active prefix is assigned in addition to the active ID to indicate that it is an active marker. This applies only to individual active markers that are not auto-labeled; markers that are auto-labeled using a trackable model will be assigned the respective label.
When a trackable asset (e.g. Rigid Body) is defined using active markers, its active ID information is stored in the asset along with the marker positions. When auto-labeling the markers in the space, the trackable asset will additionally search for reconstructions with matching active IDs, in addition to the marker arrangements, to auto-label a set of markers. This adds an additional safeguard to the auto-labeler and prevents mislabeling errors.
Rigid Body definitions created from actively labeled reconstructions will search for respective marker IDs in order to solve the Rigid Body. This gives a huge benefit because the active markers can be placed in perfectly symmetrical marker arrangements among multiple Rigid Bodies and not run into labeling swaps. With active markers, only the 3D reconstructions with active IDs stored under the corresponding Rigid Body definition will contribute to the solve.
If a Rigid Body was created from actively labeled reconstructions, the corresponding Active ID gets saved under the corresponding Rigid Body properties. In order for the Rigid Body to be tracked, the reconstructions with matching marker IDs in addition to matching marker placements must be tracked in the volume. If the active ID is set to 0, it means no particular marker ID is given to the Rigid Body definition and any reconstructions can contribute to the solve.
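Conceptually, the active ID acts as an extra filter on top of the geometric fit. The sketch below illustrates that idea only; it is not Motive's solver code, and the data structures are hypothetical:

```python
# Conceptual sketch only, not Motive's solver. A Rigid Body defined from
# active markers keeps the expected active IDs, and candidate reconstructions
# must match those IDs before the marker geometry is even considered.
def candidate_markers(reconstructions, expected_active_ids):
    """Keep only reconstructions whose active ID belongs to this Rigid Body.

    An active ID of 0 on the Rigid Body definition means "no ID constraint",
    so every reconstruction remains a candidate.
    """
    if not expected_active_ids or expected_active_ids == {0}:
        return list(reconstructions)
    return [r for r in reconstructions if r["active_id"] in expected_active_ids]

recs = [{"active_id": 101, "xyz": (0.0, 1.0, 0.2)},
        {"active_id": 205, "xyz": (0.1, 1.1, 0.2)}]
print(candidate_markers(recs, {101, 102, 103}))  # only the ID-101 marker survives
```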
PrimeX 41, PrimeX 22, Prime 41*, and Prime 17W* camera models have powerful tracking capability that allows tracking outdoors. With strong infrared (IR) LED illuminations and some adjustments to its settings, a Prime system can overcome sunlight interference and perform 3D capture. This page provides general hardware and software system setup recommendations for outdoor captures.
Please note that when capturing outdoors, the cameras will have shorter tracking ranges compared to when tracking indoors. Also, the system calibration will be more susceptible to change in outdoor applications because there are environmental variables (e.g. sunlight, wind, etc.) that could alter the system setup. To ensure tracking accuracy, routinely re-calibrate the cameras throughout the capture session.
Even though it is possible to capture under the influence of the sun, it is best to pick cloudy days for captures in order to obtain the best tracking results. The reasons include the following:
Bright illumination from daylight will introduce extraneous reconstructions, requiring additional post-processing effort to clean up the captured data.
Throughout the day, the position of the sun continuously changes, as do the reflections and shadows of nearby objects. For this reason, the camera system needs to be routinely re-masked or re-calibrated.
The surroundings can also work to your advantage or disadvantage depending on the situation. Different outdoor objects reflect 850 nm infrared (IR) light in ways that can be unpredictable without testing. Lining your background with objects that are black in IR will help distinguish your markers from the background, which will help with tracking. Some examples of outdoor objects and their relative brightness are as follows:
Grass typically appears as bright white in IR.
Asphalt typically appears dark black in IR.
Concrete varies, but it usually appears gray in IR.
In general, setting up a truss system for mounting the cameras is recommended for stability, but for outdoor captures, it could be too much effort to do so. For this reason, most outdoor capture applications use tripods for mounting the cameras.
Do not aim the cameras directly towards the sun. If possible, place and aim the cameras so that they are capturing the target volume at a downward angle from above.
Increase the f-stop setting on the Prime cameras to decrease the aperture size of the lenses. The f-stop setting determines the amount of light let through the lens, and increasing the f-stop value decreases the overall brightness of the captured image, allowing the system to better accommodate sunlight interference. Furthermore, this allows camera exposures to be set to a higher value, which will be discussed in a later section. Note that the f-stop can be adjusted only on PrimeX 41, PrimeX 22, Prime 41*, and Prime 17W* camera models.
4. [Camera Setup] Utilize shadows
Even though it is possible to capture under sunlight, the best tracking result is achieved when the capture environment is best optimized for tracking. Whenever applicable, utilize shaded areas in order to minimize the interference by sunlight.
Increase the LED setting on the camera system to its maximum so that the IR LEDs illuminate at maximum strength. Strong IR illumination allows the cameras to better differentiate the emitted IR reflections from ambient sunlight.
In general, increasing the camera exposure makes the overall image brighter, but it also allows the IR LEDs to light up and remain at their maximum brightness for a longer period of each frame. This way, the IR illumination is stronger on the cameras, and the imager can more easily detect the marker reflections in the IR spectrum.
When used in combination with the increased f-stop on the lens, this adjustment gives a better distinction of IR reflections. Note that this setup applies only to outdoor applications; for indoor applications, the exposure setting is generally used to control the overall brightness of the image.
*Legacy camera models
This page includes all of the Motive tutorial videos for visual learners.
Updated videos coming soon!
This page is an introduction showing how to use OptiTrack cameras to set up an LED Wall for Virtual Production. This process is also called In-Camera Virtual Effects or InCam VFX. This is an industry technique used to simulate the background of a film set to make it seem as if the actor is in another location.
This is a list of required hardware and what each portion is used for.
You will need one computer to drive Motive/OptiTrack and another to drive the Unreal Engine System.
Motive PC - The CPU is the most important component and should use the latest generation of processors.
Unreal Engine PC - Both the CPU and GPU are important. However, the GPU in particular needs to be top of the line to render the scene, for example an RTX 3080 Ti. Setups that involve multiple LED walls stitched together will require graphics cards that can synchronize with each other, such as the NVIDIA A6000.
The Unreal Engine computer will also require an SDI input card with both SDI and genlock support. We used the BlackMagic Decklink SDI 4K and the BlackMagic Decklink 8K Pro in our testing, but other cards will work as well.
You will need a studio video camera with SDI out, timecode in, and genlock in support. Any studio camera with these BNC ports will work, and there are a lot of different options for different budgets. Here are some suggestions:
Etc...
Cameras without these synchronization features can be used, but may look like they are stuttering due to frames not perfectly aligning.
A camera dolly or other type of mounting system will be needed to move and adjust the camera around your space, so that the movement looks smooth.
Your studio camera should have a cage around it in order to mount objects to the outside of it. You will need to rigidly mount your CinePuck to the outside. We used SmallRig NATO Rail and Clamps for the cage and Rigid Body mounting fixtures.
You’ll also need a variety of cables to connect from the camera back to where the computers are located. This includes power cables, BNC cables, USB extension cables (optional, for powering the CinePuck), etc. These will not all be listed here, since they depend on the particular setup of your system.
Many systems will want a lens encoder in the mix. This is only necessary if you plan on zooming your lens in or out between shoots. We do not use this device in this example, for simplicity.
In order to run your LED wall, you will need two things: an LED Wall and a Video Processor.
For large walls composed of LED wall subsections you will need an additional video processor and an additional render PC for each wall as well as an SDI splitter. We are using a single LED wall for simplicity.
The LED Wall portion contains the grid of LED lights, the power structure, and the connections from the panels into a video controller, but it does not by itself provide a way to send an HDMI signal to the wall.
We used Planar TVF 125 for our video wall, but there are many other options out there depending on your needs.
The video processor is responsible for taking an HDMI/Display Port/SDI signal and rendering it on the LED wall. It's also responsible for synchronizing the refresh rate of the LED wall with external sources.
The video processor we used for controlling the LED wall was the Color Light Z6. However, Brompton Technology video processors are a more typical film standard.
You will need either a timecode generator AND a genlock generator, or a device that does both. Without these devices, the exposure of your camera will not align with when the LED wall renders, and you may see visible rendering artifacts from the LED wall. These signals are used to synchronize Motive, the cinema camera, the LED walls, and any other devices together.
Timecode - The timecode signal should be fed into Motive and the Cinema camera. The SDI signal from the camera will plug into the SDI card, which will carry the timecode to the Unreal Engine computer as well.
Genlock - The genlock should be fed into Motive, the cinema camera, and the Video Processor(s).
Timecode is for frame alignment. It allows you to synchronize data in post by aligning the timecode values together. (However, it does not guarantee that the cameras expose and the LED wall renders at the same time.) There are a variety of manufacturers whose timecode generators will work. Here are some suggestions:
Etc...
Genlock is for frame synchronization. It allows you to synchronize data in real time by aligning the times when a camera exposes or an LED wall renders its image. (However, it does not align frame numbers, so one system could be on frame 1 and another on frame 23.) There are a variety of manufacturers whose genlock generators will work. Here are some suggestions:
Etc...
Below is a diagram that shows what devices are connected to each other. Both Genlock and Timecode are connected via BNC ports on each device.
Plug the Genlock Generator into:
eSync2's Genlock-In BNC port
Any of the Video Processor's BNC ports
Studio Video Camera's Genlock port
Plug the TimeCode Generator into:
eSync2's Timecode-In BNC port
Studio Video Camera's TC IN BNC port
Plug the Studio Video Camera into:
Unreal Engine PC SDI IN port for Genlock via the SDI OUT port on the Studio Video Camera
Unreal Engine PC SDI IN port for Timecode via the SDI OUT port on the Studio Video Camera
A rigid board with a black and white checkerboard on it is needed to calibrate the lens characteristics. This object will likely be replaced in the future.
There are a lot of hardware devices required, so below is a rough list of required hardware as a checklist.
Truss or other mounting structure
Prime/PrimeX Cameras
Ethernet Cables
Network Switches
Calibration Wand
Calibration Square
Motive License
License Dongle
Computer (for Motive)
Network Card for the Computer
CinePuck
BaseStation (for CinePuck)
eSync2
BNC Cables (for eSync2)
Timecode Generator
Genlock Generator
Probe (optional)
Extra markers or trackable objects (optional)
Cinema/Broadcast Camera
Camera Lens
Camera Movement Device (ex. dolly, camera rails, etc...)
Camera Cage
Camera power cables
BNC Cables (for timecode, SDI, and Genlock)
USB C extension cable for powering the CinePuck (optional)
Lens Encoder (optional)
Truss or mounting system for the LED Wall
LED Wall
Video Processor
Cables to connect between the LED Wall and Video Processor
HDMI or other video cables to connect to Unreal PC
Computer (for Unreal Engine)
SDI Card for Cinema Camera input
Video splitters (optional)
Video recorder (for recording the camera's image)
Checkerboard for Unreal calibration process
Non-LED Wall based lighting (optional)
Next, we'll cover how to configure Motive for tracking.
After calibrating Motive, you'll want to set up your active hardware. This requires a BaseStation and a CinePuck.
Plug the BaseStation into a Power over Ethernet (PoE) switch just like any other camera.
CinePuck
Firmly attach the CinePuck to your Studio Camera using your SmallRig NATO Rail and Clamps on the cage of the camera.
The CinePuck can be mounted anywhere on the camera, but for best results put the puck closer to the lens.
Turn on your CinePuck, and let it calibrate the IMU bias by waiting until the flashing red and orange lights turn into flashing green lights.
It is recommended to power the CinePuck over a USB connection for the duration of filming a scene to avoid running out of battery power; a light on the CinePuck should turn on when power is connected.
Change the tracking mode to Active + Passive.
Create a Rigid Body out of the CinePuck markers.
For active markers, turning up the residual will usually improve tracking.
Go through a refinement process in the Builder pane to get the highest quality Rigid Body.
Show advanced settings for that Rigid Body, then input the Active Tag ID and Active RF (radio frequency) Channel for your CinePuck.
If you input the IMU properties incorrectly or it is not successfully connecting to the BaseStation, then your Rigid Body will turn red. If you input the IMU properties correctly and it successfully connects to the BaseStation, then it will turn orange and need to go through a calibration process. Please refer to the table below for more detailed information.
You will need to move the Rigid Body around in each axis until it turns back to the original color. At this point you are tracking with both the optical marker data and the IMU data through a process called sensor fusion. This takes the best aspects of both the optical motion capture data and the IMU data to make a tracking solution better than when using either individually. As an option, you may now turn the minimum markers for your Rigid Body down to 1 or even 0 for difficult tracking situations.
After Motive is configured, we'll need to set up the LED Wall and Calibration Board as trackable objects. This is not strictly necessary for the LED Wall, but it will make setup easier later and make it unimportant to set the ground plane exactly.
Place four to six markers on the LED Wall without covering the LEDs on the Wall.
Use the probe to sample the corners of the LED Wall.
You will need to make a simple plane geometry that is the size of your LED wall using your favorite 3D editing tool such as Blender or Maya. (A sample plane comes with the Unreal Engine Live Link plugin if you need a starting place.)
Any changes you make to the geometry will need to be on the Rigid Body position and not the geometry offset.
You can make these adjustments using the Builder pane, then zeroing the Attach Geometry offsets in the Properties pane.
Place four to six markers without covering the checkered pattern.
Use the probe to sample the bottom left vertex of the grid.
Use the gizmo tool to orient the Rigid Body pivot and place pivot in the sampled location.
Next, you'll need to make sure that your eSync is configured correctly.
If not already done, plug your genlock and timecode signals into the appropriately labeled eSync input ports.
Select the eSync in the Devices pane.
In the Properties pane, check to see that your timecode and genlock signals are coming in correctly at the bottom.
Then, set the Source to Video Genlock In, and set the Input Multiplier to 4 if your genlock is at 30 Hz, or 5 if your genlock is at roughly 24 Hz (see the arithmetic sketch after these steps).
Your cameras should stop tracking for a few seconds, then the rate in the Devices pane should update if you are configured correctly.
Make sure to turn on Streaming in Motive, then you are all done with the Motive setup.
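As referenced in the eSync step above, the Input Multiplier simply scales the incoming genlock rate up to the final camera rate (illustrative arithmetic):

```python
# Sketch: the final camera rate is the genlock rate times the Input Multiplier.
def camera_rate(genlock_hz: float, input_multiplier: int) -> float:
    return genlock_hz * input_multiplier

print(camera_rate(30, 4))       # 120 Hz
print(camera_rate(23.976, 5))   # ~119.88 Hz, i.e. a roughly 24 Hz genlock times 5
```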
Start Unreal Engine and choose the default project under the “Film, Television, and Live Events” section called “InCamera VFX”
Before we get started, verify that the following plugins are enabled:
Camera Calibration (Epic Games, Inc.)
OpenCV Lens Distortion (Epic Games, Inc.)
OptiTrack - LiveLink (OptiTrack)
Media Player Plugin for your capture card (For example, Blackmagic Media Player)
Media Foundation Media Player
WMF Media Player
Many of these will be already enabled.
The main setup process consists of four general steps:
1. Set up the video media data.
2. Set up FIZ and Live Link sources.
3. Track and calibrate the camera in Unreal Engine.
4. Set up nDisplay.
Right click in the Content Browser Panel > Media > Media Bundle and name the Media Bundle something appropriate.
Double click the Media Bundle you just created to open the properties for that object.
Set the Media Source to the Blackmagic Media Source, the Configuration to the resolution and frame rate of the camera, and set the Timecode Format to LTC (Linear Timecode).
Drag this Media Bundle object into the scene and you’ll see your video appear on a plane.
You’ll also need to create two other video sources doing roughly the same steps as above.
Right click in the Content Browser Panel > Media > Blackmagic Media Source.
Open it, then set the configuration and timecode options.
Right click in the Content Browser Panel > Media > Media Profile.
Click Configure Now, then Configure.
Under Media Sources set one of the sources to Blackmagic Media Source, then set the correct configuration and timecode properties.
Before we set up timecode and genlock, it’s best to have a few visual metrics visible to validate that things are working.
In the Viewport click the triangle dropdown > Show FPS and also click the triangle dropdown > Stat > Engine > Timecode.
This will show timecode and genlock metrics in the 3D view.
If not already open you’ll probably want the Window > Developer Tools > Timecode Provider and Window > Developer Tools > Genlock panels open for debugging.
You should notice that your timecode and genlock are noticeably incorrect; this will be corrected in later steps below.
The timecode will probably just be the current time.
To create a timecode blueprint, right click in the Content Browser Panel > Blueprint > BlackmagicTimecodeProvider and name the blueprint something like “BM_Timecode”.
The settings for this should match what you did for the Video Data Source.
Set the Project Settings > Engine General Settings > Timecode > Timecode Provider = “BM_Timecode”.
At this point your timecode metrics should look correct.
Right click in the Content Browser Panel > Blueprint > BlackmagicCustomTimeStep and name the blueprint something like “BM_Genlock”.
The settings for this should match what you did for the Video Data Source.
Set the Project Settings > Engine General Settings > Framerate > Custom TimeStep = “BM_Genlock”.
Your genlock pane should be reporting correctly, and the FPS should be roughly your genlock rate.
Debugging Note: Sometimes you may need to close then restart the MediaBundle in your scene to get the video image to work.
Shortcut: There is a shortcut for setting up the basic Focus Iris Zoom file and the basic lens file. In the Content Browser pane you can click View Option and Show Plugin Content, navigate to the OptiTrackLiveLink folder, then copy the contents of this folder into your main content folder. Doing this will save you a lot of steps, but we will cover how to make these files manually as well.
We need to make a blueprint responsible for controlling our lens data.
Right click the Content Browser > Live Link > Blueprint Virtual Subject, then select the LiveLinkCameraRole in the dropdown.
Name this file something like “FIZ_Data”.
Open the blueprint. Create two new objects called Update Virtual Subject Static Data and Update Virtual Subject Frame Data.
Connect the Static Data one to Event on Initialize and the Frame Data one to Event on Update.
Right click on the blue Static Data and Frame Data pins and Split Struct Pin.
In the Update Virtual Subject Static Data object:
Disable Location Supported and Rotation Supported, then enable the Focus Distance Supported, Aperture Supported, and Focal Length Supported options.
Create three new float variables called Zoom, Iris, and Focus.
Drag them into the Event Graph and select Get to allow those variables to be accessed in the blueprint.
Connect Zoom to Frame Data Focal Length, connect Iris to Frame Data Aperture, and connect Focus to Frame Data Focus Distance.
Compile your blueprint.
Select your variables and set the default value to the lens characteristics you will be using.
For our setup we had used:
Zoom is 20 mm, Iris is f/2.8, and Focus is 260 cm.
Compile and save your FIZ blueprint.
Both Focus and Iris graphs should create an elongated "S" shape based on the two data points provided for each above.
To create a lens file right click the Content Browser > Miscellaneous > Lens File, then name the file appropriately.
Double click the lens file to open it.
Switch to the Lens File Panel.
Click the Focus parameter.
Right click in the graph area and choose Add Data Point, click Input Focus and enter 10, then enter 10 for the Encoder mapping.
Repeat the above step to create a second data point, but with values of 1000 and 1000.
Click the Iris parameter.
Right click in the graph area and choose Add Data Point.
Click Input Iris and enter 1.4, then enter 1.4 for the Encoder mapping.
Repeat the above step to create a second data point, but with values of 22 and 22.
Save your lens file.
The above process is to set up the valid ranges for our lens focus and iris data. If you use a lens encoder, then this data will be controlled by the input from that device.
In the Window > Live Link pane, click the + Source icon, then Add Virtual Subject.
Choose the FIZ_Data object that we created above in the FIZ Data section of this OptiTrack Wiki page and add it.
Click the + Source icon, navigate to the OptiTrack source, and click Create.
Click Presets and create a new preset.
Edit > Project Settings and Search for Live Link and set the preset that you just created as the Default Live Link Preset.
You may want to restart your project at this point to verify that the live link pane auto-populates on startup correctly. Sometimes you need to set this preset twice to get it to work.
From the Place Actors window, create an Empty Actor; this will act as the camera parent.
Add it to the nDisplay_InCamVFX_Config object.
Create another actor object and make it a child of the camera parent actor.
Zero out the location of the camera parent actor from the Details pane under Transform.
For our setup, in the image to the right, we have labeled the empty actor “Cine_Parent” and its child object “CineCameraActor1”.
Select the default “CineCameraActor1” object in the World Outliner pane.
In the Details pane there should be a total of two LiveLinkComponentControllers.
You can add a new one by using the + Add Component button.
For our setup we have labeled one live link controller “Lens” and the other “OptiTrack”.
Click Subject Representation and choose the Rigid Body associated with your camera.
Click Subject Representation and choose the virtual camera. Then go to the “Lens” Live Link Controller, navigate to Role Controllers > Camera Role > Camera Calibration > Lens File Picker, and select the lens file you created. This process allows your camera to be tracked and associates the lens data with the camera you will be using.
Select the Place Actors window to create an Empty Actor and add it to the nDisplay_InCamVFX_Config object.
Zero out the location of this actor.
In our setup we have named our Empty Actor "Checkerboard_Parent".
From the Place Actors window also create a “Camera Calibration Checkerboard” actor for validating our camera lens information later.
Make it a child of the “Checkerboard_Parent” actor from before.
Configure the Num Corner Row and Num Corner Cols.
These values should be one less than the number of black/white squares on your calibration board. For example, if your calibration board has 9 rows of alternating black and white squares and 13 columns, you would input 8 in the Num Corner Row field and 12 in the Num Corner Cols field (see the arithmetic sketch after these steps).
Also input the Square Side Length which is the measurement of a single square (black or white).
Set the Odd Cube Materials and Even Cube Materials to solid colors to make it more visible.
Select "Checkerboard_Parent" and + Add Component of a Live Link Controller.
Add the checkerboard Rigid Body from Motive as the Subject Representation.
At this point your checkerboard should be tracking in Unreal Engine.
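As mentioned in the checkerboard configuration step above, the corner counts follow directly from the number of squares: interior corners are one fewer than the squares in each direction (simple arithmetic sketch; the 9 x 13 board is just an example):

```python
# Sketch: interior-corner counts for the Camera Calibration Checkerboard actor.
def corner_counts(square_rows: int, square_cols: int) -> tuple[int, int]:
    """Num Corner Row / Num Corner Cols are one less than the squares per side."""
    return square_rows - 1, square_cols - 1

print(corner_counts(9, 13))  # (8, 12), matching the example above
```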
Double click the "Lens" file from earlier and go to the Calibration Steps tab and the Lens Information section.
On the right, select your Media Source.
Set the Lens Model Name and Serial Number to some relevant values based on what physical lens you are using for your camera.
The Sensor Dimensions are the trickiest portion to get correct here.
This is the physical size of the image sensor on your camera in millimeters.
You will need to consult the documentation for your particular camera to find this information.
For example, for a Sony FS7 outputting 1920x1080, we'd input X = 22.78 mm and Y = 12.817 mm for the Sensor Dimensions.
The lens information will calculate the intrinsic values of the lens you are using.
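If your camera documentation lists only the sensor width, the height can usually be derived from the output aspect ratio; the sketch below just reproduces the example figures above and is not a substitute for the manufacturer's specification:

```python
# Sketch: derive sensor height from the sensor width and the output aspect ratio.
def sensor_height_mm(width_mm: float, aspect_w: int = 16, aspect_h: int = 9) -> float:
    return width_mm * aspect_h / aspect_w

# Using the example width above for a 16:9 (1920x1080) output:
print(round(sensor_height_mm(22.78), 3))  # ~12.814 mm, close to the 12.817 mm quoted
```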
Choose the Lens Distortion Checkerboard algorithm and choose the checkerboard object you created above.
The Transparency slider can be adjusted between showing the camera image, 3D scene, or a mix of both. Show at least some of the raw camera image for this step.
Place the checkerboard in the view of the camera, then click in the 2D view to take a sample of the calibration board.
You will want to give the algorithm a variety of samples mostly around the edge of the image.
You will also want to get some samples of the calibration board at two different distances. One closer to the camera and one closer to where you will be capturing video.
Taking samples can be a bit of an art form.
You will want somewhere around 15 samples.
Once you are done click Add to Lens Distortion Calibration.
With an OptiTrack system you are looking for an RMS Reprojection Error of around 0.1 at the end. Slightly higher values can be acceptable as well, but will be less accurate.
The Nodal Offset tab will calculate the extrinsics, i.e. the position of the camera relative to the OptiTrack Rigid Body.
Select the Nodal Offset Checkerboard algorithm and your checkerboard from above.
Take samples similar to the Lens Distortion section.
You will want somewhere around 5 samples.
Click Apply to Camera Parent.
This will modify the position of the “Cine_Parent" actor created above.
Set the Transparency to 0.5.
This will allow you to see both the direct feed from the camera and the 3D overlay at the same time. As long as your calibration board is correctly set up in the 3D scene, then you can verify that the 3D object perfectly overlays on the 2D studio camera image.
In the World Outliner, Right click the Edit nDisplay_InCameraVFX_Config button. This will load the controls for configuring nDisplay.
For larger setups, you will configure a display per section of the LED wall. For smaller setups, you can delete additional sections (VP_1, VP_2, and VP_3) accordingly from the 3D view and the Cluster pane.
For a single display:
Select VP_0 and in the Details pane set the Region > W and H properties to the resolution of your LED display.
Do the same for Node_0 (Master).
Select VP_0 and load the plane mesh we created to display the LED wall in Motive.
An example file for the plane mesh can be found in the Contents folder of the OptiTrack Live Link Plugin. This file defines the physical dimensions of the LED wall.
Select the "ICVFXCamera" actor, then choose your camera object under In-Camera VFX > Cine Camera Actor.
Compile and save this blueprint.
Click Export to save out the nDisplay configuration file. (This file is what you will be asked for in the future in an application called Switchboard, so save it somewhere easy to find.)
Go back to your main Unreal Engine window and click on the nDisplay object.
Click + Add Component and add a Live Link Controller.
Set the Subject Representation to the Rigid Body for your LED Wall in Motive and set the Component to Control to “SM_Screen_0”.
At this point your LED Wall should be tracked in the scene, but none of the rendering will look correct yet.
To validate that this was all setup correctly you can turn off Evaluate Live Link for your CineCamera and move it so that it is in front of the nDisplay LED Wall.
Make sure to re-enable Evaluate Live Link afterwards.
The next step would be to add whatever reference scene you want to use for your LED Wall Virtual Production shoot. For example, we just duplicated a few of the color calibrators (see image to the right) included with the sample project, so that we have some objects to visualize in the scene.
If you haven’t already you will need to go to File > Save All at this point. Ideally, you should save frequently during the whole process to make sure you don’t lose your data.
Click the double arrows above the 3D Viewport >> and choose Switchboard > Launch Switchboard Listener. This launches an application that listens for a signal from Switchboard to start your experience.
Click the double arrows above the 3D Viewport >> and choose Launch Switchboard.
If this is your first time doing this, then there will be a small installer that runs in the command window.
A popup window will appear.
Click the Browse button next to the uProject option and navigate to your project file (.uproject).
Then click Ok and the Switchboard application will launch.
In Switchboard click Add Device, choose nDisplay, click Browse and choose the nDisplay configuration file (.ndisplay) that you created previously.
In Settings, verify that the correct project, directories and nDisplay are being referenced.
Click the power plug icon to Connect all devices.
Make sure to save and close your Unreal Engine project.
Click the up arrow button to Start All Connected Devices.
The image on the LED wall should look different when you point the camera at it, since it is calculating for the distortion and position of the lens. From the view of the camera it should almost look like you are looking through a window where the LED wall is located.
You might notice that the edge of the camera’s view is a hard edge. You can fix this and expand the field of view slightly to account for small amounts of lag by going back to your Unreal Engine project into the nDisplay object.
Select the "ICVFXCamera" object in the Components pane.
In the Details pane set the Field of View Multiplier to a value of about 1.2 to account for any latency, then set the Soft Edge > Top and Bottom and Sides properties to around .25 to blur the edges.
From an outside perspective, the final product will look like a static image that updates based on where the camera is pointing. From the view of the cameras, it will essentially look like you are looking through a window to a different world.
In our example, we are just tracking a few simple objects. In real productions you’ll use high quality 3D assets and place objects in front of the LED wall that fit with the scene behind to create a more immersive experience, like seen in the image to the right. With large LED walls, the walls themselves provide the natural lighting needed to make the scene look realistic. With everything set up correctly, what you can do is only limited by your budget and imagination.
Connected cameras will be listed under the Devices pane. This panel is where you configure settings (FPS, exposure, LED, etc.) for each camera and decide whether to use selected cameras for 3D tracking or reference videos. Only the cameras that are set to tracking mode will contribute to reconstructing 3D coordinates; cameras in reference mode capture grayscale images for reference purposes only. The Devices pane can be accessed under the View tab in Motive or by clicking its icon on the main toolbar.
When an item is selected in Motive, all of its related properties will be listed under the Properties pane. For example, if you have selected a skeleton in the 3D viewport, its corresponding properties will be listed under this pane, and you can view the settings and configure them as needed. You can also select connected cameras, sync devices, rigid bodies, any external devices listed in the Devices pane, or recorded Takes to view and configure their properties. This pane will be used in almost all of the workflows. The Properties pane can be accessed under the View tab in Motive or by clicking its icon on the main toolbar.
The 3D Viewport is where 3D data is displayed in Motive. Here, you can view, analyze, and select reconstructed 3D coordinates within a calibrated capture volume. This panel can be used both in live capture and in recorded data playback. You can also select multiple markers and define rigid bodies and skeleton assets. If desired, additional view panes can be opened under the View tab or by clicking icons on the main toolbar.
The Camera Preview pane shows 2D views of the cameras in a system. Here you can monitor each camera view and apply camera masks. This pane is also used to examine the 2D objects (circular reflections) that are captured, or filtered, in order to examine which reflections are processed and reconstructed into 3D coordinates. If desired, additional view panes can be opened under the View tab or by clicking icons on the main toolbar.
The Calibration pane is used in the camera calibration process. In order to compute 3D coordinates from captured 2D images, the camera system needs to be calibrated first. All tools necessary for calibration are included within the Calibration pane, and it can be accessed under the View tab or by clicking its icon on the main toolbar.
The Control Deck, located at the bottom of Motive, is where you can control the recording (Live mode) or playback (Edit mode) of capture data. In Live mode, you can use the Control Deck to start recording and assign a filename for the capture. In Edit mode, you can use this pane to control the playback of recorded Take(s).
[Motive:Calibration pane] Click the button from the Camera Preview pane.
[Motive:Calibration pane] Mask the remaining extraneous reflections using Motive. Click Block Visible from the Calibration pane, or use the icon in the Camera Preview pane, to apply software masking and automatically block any light sources or reflections that cannot be removed from the volume. Once the masks are applied, all of the extraneous reflections (white) in the 2D Camera Preview pane will be covered with red pixels.
Each capture recording is saved in a Take (TAK) file, and related Take files can be organized in session folders. Start your capture by first creating a new session folder: create a new folder in the desired directory on the host computer and load it into the Data pane either by clicking the icon or by drag-and-dropping it onto the data management pane. If no session folder is loaded, all recordings will be saved to the default folder located in the user documents directory (Documents\OptiTrack\Default). All newly recorded Takes will be saved within the currently selected session folder, which will be marked with the symbol.
Data output from the entire camera system can be monitored through the Status Panel. Output from individual cameras can be monitored from the 2D Camera Preview pane when the Camera Info is enabled under the visual aids () option.
This guide is for only. Third-party IR LEDs will not work with instructions provided on this page.
For active components that were shipped prior to September 2017, please see the page for more information about the firmware compatibility.
Active tracking is supported only with Ethernet camera systems (Prime series or Slim 13E cameras). For instructions on how to set up a camera system see: .
For more information, please read through the page.
When tracking only active markers, the cameras do not need to emit IR lights. In this case, you can disable the IR settings in the .
With a BaseStation and active markers communicating on the same RF channel, active markers will be reconstructed and tracked in Motive automatically. From the unique illumination patterns, each active marker gets labeled individually, and a unique marker ID gets assigned to the corresponding reconstruction in Motive. These IDs can be monitored in the . To check the marker IDs of respective reconstructions, enable the Marker Labels option under the visual aids (), and the IDs of selected markers will be displayed. The marker IDs assigned to active marker reconstructions are unique and can be used to point to a specific marker among many reconstructions in the scene.
This tutorial requires Motive 2.3.x, Unreal Engine 4.27, and the OptiTrack Live Link plugin.
The OptiTrack system is used to track the camera, calibration checkerboard, (optional) LED Wall, and (optional) any other props or additional cameras. As far as OptiTrack hardware is concerned, you will need all of the typical hardware for a motion capture system plus an eSync2, BaseStation, CinePuck, Probe, and a few extra markers. Please refer to the for instructions on how to do this.
(What we use internally)
We assume that you have already set up and calibrated Motive before starting this video. If you need help getting started with Motive, then please refer to our wiki page.
If you don’t have this information, then consult the IMU tag instructions found here.
Before configuring the LED Wall and Calibration Board, you'll first want to create a probe Rigid Body. The probe can be used to measure locations in the volume using the calibrated position of the metal tip. For more information on using the probe measurement tool, please feel free to visit our wiki page.
If the plane does not perfectly align with the probe points, then you will need to use the gizmo tool to align the geometry. If you need help setting up or using the Gizmo tool, please visit our other wiki page.
Rigid Body color in the Viewport and what it indicates:
When the Rigid Body is shown in its assigned Rigid Body color, Motive is connected to the IMU and receiving data.
If the color is orange, the IMU is attempting to calibrate. Slowly rotate the object until the IMU finishes calibrating.
If the color is red, the Rigid Body is configured to receive IMU data, but no data is coming through the designated RF channel. Make sure the Active Tag ID and RF channel values match the configuration on the active Tag/Puck.
Before setting up a motion capture system, choose a suitable setup area and prepare it in order to achieve the best tracking performance. This page highlights some of the considerations to make when preparing the setup area for general tracking applications. Note that this page provides general recommendations only; these could vary depending on the size of the system or the purpose of the capture.
First of all, pick a place to set up the capture volume.
Setup Area Size
Make sure there is plenty of room for setting up the cameras. It is usually beneficial to have extra space in case the system setup needs to be altered. Also, pick an area with enough vertical clearance. Setting up the cameras at a high elevation is beneficial because it gives the cameras wider lines of sight, providing better coverage of the capture volume.
Minimal Foot Traffic
After camera system calibration, the system should remain unaltered in order to maintain the calibration quality. Physical contact with the cameras could change the setup, requiring the system to be re-calibrated. To prevent this, pick a space with only minimal foot traffic.
Flooring
Avoid reflective flooring. The IR lights from the cameras could be reflected by it and interfere with tracking. If this is inevitable, consider covering the floor with surface mats to prevent the reflections.
Avoid flexible or deformable flooring; such flooring can negatively impact your system's calibration.
For the best tracking performance, minimize ambient light interference within the setup area. The motion capture cameras track the markers by detecting reflected infrared light, and any extraneous IR light within the capture volume could interfere with the tracking.
Sunlight: Block any open windows that might let sunlight in. Sunlight contains wavelengths within the IR spectrum and could interfere with the cameras.
IR Light sources: Remove any unnecessary lights in the IR wavelength range from the capture volume. IR light can be emitted by sources such as incandescent, halogen, and high-pressure sodium lamps, or by other IR-based devices.
Dark-colored objects absorb most visible light; however, this does not mean that they absorb IR light as well. Therefore, the color of a material is not a good indicator of whether an object will be visible within the IR spectrum. Some materials look dark to the human eye but appear bright white to the IR cameras. If these items are placed within the tracking volume, they could introduce extraneous reconstructions.
Since you already have IR cameras in hand, use one of them to check whether there are IR-white materials within the volume. If there are, move them out of the volume or cover them up.
Remove any unnecessary obstacles from the capture volume, since they could block the cameras' view and prevent them from tracking the markers. Leave only the items that are necessary for the capture.
Remove reflective objects near or within the setup area, since IR illumination from the cameras could be reflected by them. You can also use non-reflective tape to cover reflective parts.
Prime 41 and Prime 17W cameras are equipped with powerful IR LED rings, which enable tracking outdoors, even in the presence of some extraneous IR light. The strong illumination from the Prime 41 cameras allows a mocap system to better distinguish marker reflections from extraneous illumination. System settings and camera placements may need to be adjusted for outdoor tracking applications.
System setup area depends on the size of the mocap system and how the cameras are positioned. To get a general idea, check out the feature on our website.
All cameras are equipped with IR filters, so extraneous lights outside of the infrared spectrum (e.g. fluorescent lights) will not interfere with the cameras. IR lights that cannot be removed or blocked from the setup area can be masked in Motive using the during the system calibration. However, this feature completely discards image data within the masked regions, and overusing it could negatively impact tracking. Thus, it is best to physically remove the object whenever possible.
Please read through the page for more information.
The OptiTrack Duo/Trio tracking bars are factory calibrated, and there is no need to calibrate the cameras to use the system. By default, the origin of the tracking volume is set at the center of the camera bar, and the axes are oriented so that the Z-axis points forward, the Y-axis up, and the X-axis left.
If you wish to change the location and orientation of the global axis, you can use the ground plane tools from the Calibration pane and use a Rigid Body or a calibration square to set the global origin.
When using the Duo/Trio tracking bars, you can set the coordinate origin at the desired location and orientation using either a Rigid Body or a calibration square as a reference point. Using a calibration square will allow you to set the origin more accurately. You can also use a custom calibration square to set this.
Steps for Adjusting the Coordinate System
First, place the calibration square at the desired origin. If you are using a Rigid Body, its pivot point position and orientation will be used as the reference.
[Motive] Open the Calibration pane.
[Motive] Open the Ground Planes page.
[Motive] Select the type of calibration square that will be used as a reference to set the global origin. Set it to Auto if you are using an OptiTrack calibration square. If you are using a Rigid Body, select the Rigid Body option from the drop-down menu. If you are using a custom calibration square, you will also need to set the vertical offset.
[Motive] Select the calibration square markers or the Rigid Body markers from the Perspective View pane.
[Motive] Click the Set Ground Plane button, and the global origin will be adjusted.
Download the Motive 3.1 software installer from the Motive Download Page to each host PC.
Run the installer and follow its prompts.
Each V120:Duo and V120:Trio includes a free license to Motive:Tracker for one device. No software license activation or security key is required.
To use multiple V120 devices, connect each one to a separate host PC with Motive installed.
Please see the Host PC Requirements section of the Installation and Activation page for computer specifications.
V120 Duo or Trio device
I/O-X (breakout box)
Power adapter and cord
Camera bar cable (attached to I/O-X)
USB Uplink cable
Mount the camera bar in the designated location.
Connect the Camera Bar Cable to the back of the camera and to the I/O-X device, as shown in the diagram above.
Connect the I/O-X device to the PC using the USB uplink cable.
Connect the power cable to the I/O-X device and plug it into a power source.
Make sure the power is disconnected from the I/O-X (breakout box) before plugging or unplugging the Camera Bar Cable. Hot-plugging this cable may damage the device.
The V120 cameras use a preset frequency for timing and can run at 25 Hz, 50 Hz or 100 Hz. To synchronize other devices with the Duo or Trio, use a BNC cable to connect an input port on the receiving device to the Sync Out port on the I/O-X device.
Output options are set in the Properties pane. Select T-Bar Sync in the Devices pane to change output options:
Exposure Time: Sends a high signal based on when the camera exposes.
Passthrough: Sync In signal is passed through to the output port.
Recording Gate: Low electrical signal (0V) when not recording and a high (3.3V) signal when recording is in progress.
Gated Exposure Time: Sends a high signal based on when the camera exposes, only while recording is in progress.
Timing signals from other devices can be fed into the V120 using the I/O-X device's Sync In port and a BNC cable. However, this port cannot reliably change the rate of the device; the only functionality that may work is passing the signal through to the output port.
The Sync In port cannot be used to change the camera's frequency reliably.
The V120 ships with a free license for Motive:Tracker installed.
The cameras are pre-calibrated and no wanding is required. The user can set the ground plane.
The V120 runs in Precision, Grayscale, and MJPEG modes. Object mode is not available.
LED lights on the back of the V120 indicate the device's status.
None: Device is off.
Red: Device is on.
Amber: Device is recognized by Motive.
None: Tracking/video is not enabled.
Solid Red: Configured for External-Sync (sync not detected).
Flashing Red: Configured for Default/Free Run mode, or for External-Sync (sync detected).
Solid Green: Configured for Internal-Sync (sync missing).
Flashing Green: Configured for Internal-Sync (sync present).
In optical motion capture systems, proper camera placement is very important in order to efficiently utilize the captured images from each camera. Before setting up the cameras, it is a good idea to plan ahead and create a blueprint of the camera placement layout. This page highlights the key aspects of and tips for efficient camera placement.
A well-arranged camera placement can significantly improve tracking quality. When tracking markers, 3D coordinates are reconstructed from the 2D views seen by each camera in the system. More specifically, correlated 2D marker positions are triangulated to compute the 3D position of each marker. Having multiple distinct vantage points on the target volume is therefore beneficial because it allows wider angles for the triangulation algorithm, which in turn improves tracking quality. Accordingly, an efficient camera arrangement distributes the cameras appropriately around the capture volume. Doing so not only improves tracking accuracy but also reduces uncorrelated rays and marker occlusions. Depending on the type of tracking application, the capture volume environment, and the size of the mocap system, proper camera placement layouts may vary.
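To make the triangulation step concrete, here is a small, hedged sketch (plain Python with NumPy, not Motive's internal solver) that estimates a 3D marker position from two camera rays by finding the closest point between them. The camera positions, ray directions, and the idea of reporting the ray-to-ray miss distance as a convergence-quality measure are illustrative assumptions.

```python
# Illustrative two-ray triangulation: each camera contributes a ray from its
# optical center through the detected 2D marker centroid. The 3D estimate is
# the midpoint of the shortest segment connecting the two rays; the length of
# that segment indicates how well the rays converge.
import numpy as np

def triangulate_two_rays(p1, d1, p2, d2):
    """p1, p2: camera centers; d1, d2: unit ray directions."""
    w = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b           # approaches 0 as the rays become parallel
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    q1 = p1 + t * d1                # closest point on ray 1
    q2 = p2 + s * d2                # closest point on ray 2
    point = (q1 + q2) / 2.0         # triangulated 3D estimate
    miss = np.linalg.norm(q1 - q2)  # ray convergence error
    return point, miss

# Hypothetical setup: two cameras looking toward a marker near the origin.
p1, p2 = np.array([-2.0, 2.0, 0.0]), np.array([2.0, 2.0, 0.0])
d1 = np.array([2.0, -2.0, 0.01]); d1 /= np.linalg.norm(d1)
d2 = np.array([-2.0, -2.0, 0.0]); d2 /= np.linalg.norm(d2)
point, miss = triangulate_two_rays(p1, d1, p2, d2)
print(point, miss)
```

Note how the denominator shrinks as the two rays become nearly parallel; this is the numerical reason that wider angles between camera views produce more robust triangulation.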
An ideal camera placement varies depending on the capture application. In order to figure out the best placements for a specific application, a clear understanding of the fundamentals of optical motion capture is necessary.
To calculate 3D marker locations, tracked markers must be simultaneously captured by at least two synchronized cameras in the system. When not enough cameras capture a marker's 2D position, the 3D marker will not be present in the captured data. As a result, the collected marker trajectory will have gaps, and the accuracy of the capture will be reduced. Furthermore, extra effort and time will be required to post-process the data. Thus, marker visibility throughout the capture is very important for tracking quality, and cameras need to capture from diverse vantage points so that marker occlusions are minimized.
The ideal camera arrangement varies depending on the captured motion types and the volume settings. For applications that require tracking markers at low heights, it is beneficial to place and aim some cameras at low elevations. For applications tracking markers placed strictly on the front of the subject, cameras at the rear will not see those markers and therefore become unnecessary. For large-volume setups, installing cameras around the perimeter of the volume at the highest elevation maximizes camera coverage and the size of the capture volume. For captures requiring extreme accuracy, it is better to place cameras close to the object so that they capture more pixels per marker and more accurately track small changes in marker position.
Again, the optimal camera arrangement depends on the purpose and features of the capture application. Plan the camera placement specific to the capture application so that the capability of the system is fully utilized. Please contact us if you need help determining the optimal camera arrangement.
For common applications of tracking 3D position and orientation of Skeletons and Rigid Bodies, place the cameras on the periphery of the capture volume. This setup typically maximizes the camera overlap and minimizes wasted camera coverage. General tips include the following:
Mount cameras at the desired maximum height of the capture volume.
Distribute the cameras equidistantly around the setup area.
Adjust angles of cameras and aim them towards the target volume.
For cameras with rectangular FOVs, mount the cameras in landscape orientation. In very small setup areas, cameras can be aimed in portrait orientation to increase vertical coverage, but this typically reduces camera overlap, which can reduce marker continuity and data quality.
TIP: For capture setups involving large camera counts, it is useful to separate the capture volume into two or more sections. This reduces the computational load on the software.
Around the volume
For common applications tracking a Skeleton or a Rigid Body to obtain 6 Degrees of Freedom (x, y, z position and orientation) data, it is beneficial to arrange the cameras around the periphery of the capture volume so that markers on both the front and the back of the subject can be tracked.
Camera Elevations
For a typical motion capture setup, placing cameras at high elevations is recommended. Doing so maximizes the capture coverage in the volume and minimizes the chance of subjects bumping into the truss structure, which can degrade the calibration. Furthermore, when cameras are placed at low elevations and aimed across from one another, they will detect each other's synchronized IR illumination, which will then need to be masked from the 2D view.
However, it can be beneficial to place cameras at varying elevations. Doing so will provide more diverse viewing angles from both high and low elevations and can significantly increase the coverage of the volume. The frequency of marker occlusions will be reduced, and the accuracy of detecting the marker elevations will be improved.
Camera to Camera Distance
Separating every camera by a consistent distance is recommended. When cameras are placed in close vicinity to one another, they capture similar images of the tracked subject, and the extra images contribute little to preventing occlusions or to the reconstruction calculations. This overlap detracts from the benefit of a higher camera count and also increases the computational load for the calibration process. Moreover, it increases the chance of marker occlusions, because markers will be blocked from multiple views simultaneously whenever obstacles are introduced.
Camera to Object Distance
An ideal distance between a camera and the captured subject also depends on the purpose of the capture. A long distance between the camera and the object gives more camera coverage for larger volume setups. On the other hand, capturing at a short distance covers less area, but the tracking measurements will be more accurate. The camera's lens focus ring may need to be adjusted for close-up tracking applications.
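As a rough illustration of this distance/accuracy trade-off, the hedged sketch below estimates how many pixels a marker spans at a given camera-to-marker distance under a simple pinhole model. The field-of-view and resolution values are illustrative assumptions, not the specification of any particular camera model.

```python
# Rough pinhole-camera estimate of marker size in pixels versus distance.
# FOV and resolution values are illustrative assumptions, not camera specs.
import math

def marker_pixels(marker_diameter_m, distance_m, hfov_deg, horizontal_res_px):
    # Width of the scene visible at this distance, from the horizontal FOV.
    view_width_m = 2.0 * distance_m * math.tan(math.radians(hfov_deg) / 2.0)
    # Fraction of the image width the marker occupies, converted to pixels.
    return horizontal_res_px * marker_diameter_m / view_width_m

for distance in (1.0, 3.0, 6.0):
    px = marker_pixels(marker_diameter_m=0.014, distance_m=distance,
                       hfov_deg=56.0, horizontal_res_px=1280)
    print(f"{distance:.0f} m: ~{px:.1f} px across")
```

Under this model, halving the camera-to-marker distance roughly doubles the pixels per marker, which is why close-range setups resolve smaller positional changes; the trade-off is reduced coverage per camera.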