Welcome to the Quick Start Guide: Getting Started!
This guide provides a quick walk-through of installing and using OptiTrack motion capture systems. Key concepts and instructions are summarized in each section of this page to help you get familiarized with the system and get you started with the capture experience.
Note that Motive offers features far beyond the ones listed in this guide, and the system can be further tuned to fit your specific capture application using those additional features. For more detailed information on each workflow, read through the corresponding workflow pages in this wiki: hardware setup and software setup.
For best tracking results, you need to prepare and clean up the capture environment before setting up the system. First, remove unnecessary objects that could block the camera views. Cover open windows and minimize incoming sunlight. Avoid setting up a system over reflective flooring since IR lights from cameras may get reflected and add noise to the data. If this is not an option, use rubber mats to cover the reflective area. Likewise, items with reflective surfaces or illuminating features should be removed or covered with non-reflective materials in order to avoid extraneous reflections.
Key Checkpoints for a Good Capture Area
Minimize ambient light, especially sunlight and other infrared light sources.
Keep the capture volume clean. Remove unnecessary obstacles from the area.
Tape over or cover any remaining reflective objects in the area.
See Also: Hardware Setup workflow pages.
Ethernet Camera Models: PrimeX series and SlimX 13 cameras. Follow the wiring diagram below and connect each of the required system components.
Connect the PoE Switch(es) to the Host PC: Start by connecting a PoE switch to the host PC via an Ethernet cable. Since the camera system takes up a large amount of data bandwidth, the Ethernet camera network traffic must be separated from the office/local area network. If the computer used for capture is connected to an existing network, you will need to use a second Ethernet port or an add-on network card to connect the computer to the camera network. When you do, make sure to turn off the computer's firewall for that particular network under the Windows Firewall settings.
Connect the Ethernet Cameras to the PoE Switch(s): Ethernet cameras connect to the host PC via PoE/PoE+ switches using Cat 6, or above, Ethernet cables.
Power the Switches: The switch must be powered in order to power the cameras. To completely shut down the camera system, the network switch needs to be powered off.
Ethernet Cables: Ethernet cable connection is subject to the limitations of the PoE (Power over Ethernet) and Ethernet communications standards, meaning that the distance between camera and switch can go up to about 100 meters when using Cat 6 cables (Ethernet cable type Cat5e or below is not supported). For best performance, do not connect devices other than the computer to the camera network. Add-on network cards should be installed if additional Ethernet ports are required.
Ethernet Cable Requirements
Cable Type
There are multiple categories of Ethernet cables, and each has different specifications for maximum data transmission rate and cable length. For an Ethernet-based system, category 6 or above Gigabit Ethernet cables should be used. 10 Gigabit Ethernet cables (Cat6a or above) are recommended, in conjunction with a 10 Gigabit uplink switch, for the connection between the uplink switch and the host PC in order to accommodate the high data traffic.
Electromagnetic Shielding
Use cables with electromagnetic interference (EMI) shielding. If unshielded cables are used, cables routed close to each other can interfere with one another and cause cameras to stall in Motive.
External Sync: If you wish to connect external devices, use the eSync synchronization hub. Connect the eSync into one of the PoE switches using an Ethernet cable, or if you have a multi-switch setup, plug the eSync into the aggregation switch.
Uplink Switch: For systems with higher camera counts that use multiple PoE switches, use an uplink Ethernet switch to link all of the switches and connect them to the host PC. In the end, the switches must be connected in a star topology, with the uplink switch as the central node connecting to the host PC. NEVER daisy chain multiple PoE switches in series, because doing so can introduce latency to the system.
High Camera Counts: For setting up more than 24 Prime series cameras, we recommend using a 10 Gigabit uplink switch and connecting it to the host PC via an Ethernet cable that supports a 10 Gigabit transfer rate (Cat6a or above). This will provide larger data bandwidth and reduce data transfer latency.
PoE switch requirement: The PoE switches must be able to provide 15.4W of power to every port simultaneously. PrimeX 41, PrimeX 22, and Prime Color camera models run in a high power mode to achieve longer tracking ranges, and they require 30W of power from each port. If you wish to operate these cameras in standard PoE mode, set the LLDP (PoE+) Detection setting to false under the application settings. For network switches provided by OptiTrack, refer to the label for the number of cameras supported by each switch.
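The per-port figures above also determine the total power budget a switch needs to supply. As a rough sanity check, the sketch below simply multiplies the camera count by the per-port draw stated above (15.4 W for standard PoE cameras, 30 W for PoE+ models such as PrimeX 41, PrimeX 22, and Prime Color). The function name and the example camera counts are illustrative only; confirm actual budgets against your switch's datasheet.

```python
# Rough PoE power-budget sanity check (illustrative only).
# Per-port figures come from this guide: 15.4 W for standard PoE cameras,
# 30 W for high-power (PoE+) models such as PrimeX 41, PrimeX 22, Prime Color.
POE_WATTS = 15.4       # standard PoE per-port draw
POE_PLUS_WATTS = 30.0  # high-power (PoE+) per-port draw

def required_switch_budget(standard_cameras: int, poe_plus_cameras: int) -> float:
    """Return the minimum switch power budget in watts for the given camera mix."""
    return standard_cameras * POE_WATTS + poe_plus_cameras * POE_PLUS_WATTS

# Example: a hypothetical 12-camera PrimeX 41 system needs at least 360 W of
# PoE budget, before any headroom recommended by the switch vendor.
print(required_switch_budget(standard_cameras=0, poe_plus_cameras=12))  # 360.0
```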
USB Cables: Keep USB cable length restrictions in mind: each USB 2.0 cable must not exceed 5 meters in length.
Connect the OptiHub(s) to the Host PC: Use USB 2.0 cables (Type A/B) to connect each OptiHub to the host PC. To optimize available bandwidth, evenly split the OptiHub connections between different USB adapters on the host PC. For large system setups, up to two 5-meter active USB extensions can be used for connecting an OptiHub, for a total length of 15 meters.
Power the OptiHub(s): Use the provided power adapters to connect each OptiHub to external power. All USB cameras will be powered by the OptiHub(s).
Connect the Cameras to the OptiHub(s): Use USB 2.0 cables (Type B/Mini-B) to connect each USB camera to an OptiHub. When using multiple OptiHubs, evenly distribute the camera connections among the OptiHubs in order to balance the processing load. Note that USB extensions are not supported when connecting a camera to an OptiHub.
Multiple OptiHubs: Up to four OptiHubs (24 USB cameras) can be used in one system. When setting up multiple OptiHubs, all OptiHubs must be connected, or cascaded, in a series chain with RCA synchronization cables. More specifically, the Hub Sync Out port of one OptiHub needs to be connected to the Hub Sync In port of another OptiHub, as shown in the diagram.
External Sync: When integrating external devices, use the External Sync In/Out ports that are available on each OptiHub.
Duo/Trio Tracking Bars use the I/O-X USB hub for powering the device (3.0 A), connecting to the computer (USB A-B), and synchronizing with external devices.
See Also: Network setup page.
Optical motion capture systems utilize multiple 2D images from each camera to compute, or reconstruct, corresponding 3D coordinates. For best tracking results, cameras must be placed so that each captures a unique vantage point of the target capture area. Place the cameras around the perimeter of the capture volume, as shown in the example below, so that markers in the volume are visible to at least two cameras at all times. Mount cameras securely onto stable structures (e.g. a truss system) so that they don't move throughout the capture. When using tripods or camera stands, ensure that they are placed in stable positions. After placing the cameras, aim them so that their views overlap around the region where most of the capture will take place. Any significant camera movement after system calibration may require re-calibration. Use cable strain relief at the camera end of the camera cables to prevent potential damage to the cameras.
See Also: Camera Placement and Camera Mount Structures pages.
In order to obtain accurate and stable tracking data, it is very important that all of the cameras are correctly focused on the target volume. This is especially important for close-up and long-range captures. For most common tracking applications, focus-to-infinity should work fine; however, it is still important to confirm that each camera in the system is in focus.
To adjust or check camera focus, place some markers in the target tracking area. Then, set the camera to raw grayscale mode, increase the exposure and LED settings, and zoom in on one of the retroreflective markers in the capture volume to check the clarity of the image. If the image is blurry, adjust the camera focus until the marker is best resolved.
See Also: Aiming and Focusing page.
In order to properly run a motion capture system using Motive, the host PC must satisfy the minimum system requirements. The required minimum specifications vary depending on the size of the mocap system and the types of cameras used. Consult our Sales Engineers, or use the Build Your Own feature on our website, to find out the host PC specification requirements.
Motive is a software platform designed to control motion capture systems for various tracking applications. Motive not only allows the user to calibrate and configure the system, but it also provides interfaces for both capturing and processing 3D data. The captured data can be recorded or live-streamed into other pipelines.
If you are new to Motive, we recommend reading through the Motive Basics page after going through this guide to learn the basic navigation controls in Motive.
Motive Activation Requirements
The following items are required to activate Motive. Please note that the Motive license must be valid past the release date of the version that you are activating. If the license has expired, please update the license or use an older version of Motive that was released prior to the license expiration date.
Motive 2.x license
USB Hardware Key
Host PC Requirements
Required PC specifications may vary depending on the size of the camera system. Generally, the recommended specs are required for systems with more than 24 cameras.

| Recommended | Minimum |
|---|---|
| OS: Windows 10, 11 (64-bit) | OS: Windows 10, 11 (64-bit) |
| CPU: Intel i7 or better | CPU: Intel i7 |
| RAM: 16GB of memory | RAM: 4GB of memory |
| GPU: GTX 1050 or better with the latest drivers | |
Download and Install
To install Motive, simply download the Motive software installer for your operating system from the Motive Download Page, then run the installer and follow its prompts.
Note: Anti-virus software can interfere with Motive's ability to communicate with cameras or other devices. It may need to be disabled, or configured to allow device communication, for the system to run properly.
The first time Motive 2.3.x is installed on a computer, the following software also needs to be installed:
Microsoft Visual C++ Redistributables 2013 and 2015
Microsoft DirectX 9c
OptiTrack USB Drivers
It is important to install the specific versions required by Motive 2.3.x, even if newer versions are installed.
License Activation Steps
Insert the USB Hardware Key into a USB-A port on the computer. If needed, you can also use a USB-A adapter to connect.
Launch Motive
Activate your software using the License Tool, which can be accessed in the Motive splash screen. You will need to input the License Serial Number and the Hash Code for your license.
After activation, the License Tool will place the license file associated with the USB Hardware Key in the License folder. For further license activation questions, visit the Licensing FAQs or contact our Support.
Notes on using USB Hardware Key
When connecting the USB Hardware Key to the computer, avoid sharing the USB card with other USB devices that frequently transmit large amounts of data. For example, if you have external devices (e.g. force plates, NI-DAQ) that communicate via USB, connect those devices to a separate USB card so that they don't interfere with the Hardware Key.
When you first launch Motive, the Quick Start panel will show up, and you can use this panel to quickly get started on specific tasks. By default, Motive will start on the Calibration Layout. Using this layout, you can calibrate the camera system and construct a 3D tracking volume. Note that the initial layout may be slightly different for different camera models or software licenses.
The following table briefly explains the purposes of some of the panels in the initial layout:

| UI Name | Description |
|---|---|
| Quick Start Panel | The Quick Start panel provides quick access to typical initial actions when using Motive. Each option quickly leads you to the layouts and actions for the corresponding selection. If you do not wish to see this panel again, uncheck the box at the bottom. This panel can be re-accessed under the Help tab. |
| Devices pane | Connected cameras are listed under the Devices pane. This panel is where you configure settings (FPS, exposure, LED, etc.) for each camera and decide whether to use selected cameras for 3D tracking or reference videos. Only cameras set to a tracking mode contribute to reconstructing 3D coordinates; cameras in reference mode capture grayscale images for reference purposes only. The Devices pane can be accessed under the View tab in Motive or by clicking its icon on the main toolbar. |
| Properties pane | When an item is selected in Motive, all of its related properties are listed under the Properties pane. For example, if you select a skeleton in the 3D viewport, its corresponding properties are listed in this pane, where you can view and configure them as needed. You can also select connected cameras, sync devices, Rigid Bodies, any external devices listed in the Devices pane, or recorded Takes to view and configure their properties. This pane is used in almost all workflows. The Properties pane can be accessed under the View tab in Motive or by clicking its icon on the main toolbar. |
| Perspective View pane | The Perspective View pane is where 3D data is displayed in Motive. Here, you can view, analyze, and select reconstructed 3D coordinates within a calibrated capture volume. This panel can be used both during live capture and during recorded data playback. You can also select multiple markers and define Rigid Body and Skeleton assets. If desired, additional view panes can be opened under the View tab or by clicking their icons on the main toolbar. |
| Camera Preview pane | The Camera Preview pane shows 2D views of the cameras in a system. Here you can monitor each camera view and apply mask filters. This pane is also used to examine the 2D objects (circular reflections) that are captured, or filtered, in order to see which reflections are processed and reconstructed into 3D coordinates. If desired, additional view panes can be opened under the View tab or by clicking their icons on the main toolbar. |
| Calibration pane | The Calibration pane is used in the camera calibration process. In order to compute 3D coordinates from captured 2D images, the camera system needs to be calibrated first. All tools necessary for calibration are included in the Calibration pane, which can also be accessed under the View tab or by clicking its icon on the main toolbar. |
| Control Deck | The Control Deck, located at the bottom of Motive, is where you control recording (Live mode) or playback (Edit mode) of capture data. In Live mode, you can use the Control Deck to start recording and assign a file name for the capture. In Edit mode, you can use it to control the playback of recorded Take(s). |
See Also: List of UI pages from the Motive section of the wiki.
Use the following controls for navigating throughout the 2D and 3D viewports in Motive. Most of the navigation controls are customizable, including both mouse and Hotkey controls. The Hotkey Editor Pane and the Mouse Control Pane under the Edit tab allow you to customize mouse navigation and keyboard shortcuts to common operations.
| Function | Default Control |
|---|---|
| Rotate view | Right + drag |
| Pan view | Middle (wheel) click + drag |
| Zoom in/out | Mouse wheel |
| Select in View | Left mouse click |
| Toggle selection in View | CTRL + left mouse click |
Now that the cameras are connected and showing up in Motive, the next step is to configure the camera settings. Appropriate camera settings will vary depending on various factors including the capture environment and tracked objects. The overall goal is to configure the settings so that the marker reflections are clearly captured and distinguished in the 2D view of each camera. For a detailed explanation on individual settings, please refer to the Devices pane page.
To check whether the camera settings are optimized, it is best to view both the grayscale mode images and the tracking mode (Object or Precision) images and make sure the marker reflections stand out from the image. You can switch a camera into grayscale mode either in Motive or by using the Aim Assist button on supported cameras. In Motive, you can right-click on the Cameras Viewport and switch the video mode in the context menu, or change the video mode through the Properties pane.
Exposure Setting
The exposure setting determines how long the camera imagers are exposed for each frame of data. With a longer exposure, more light is captured by the camera, creating brighter images that can improve visibility for small and dim markers. However, high exposure values can introduce false markers, larger marker blooms, and marker blurring, all of which can negatively impact marker data quality. It is best to keep the exposure setting as low as possible while the markers remain clearly visible in the captured images.
Tip: For the calibration process, click the Layout → Calibrate menu (CTRL + 1) to access the calibration layout.
In order to start tracking, all cameras must first be calibrated. Through the camera calibration process, Motive computes the position and orientation of each camera (extrinsics) as well as the amount of lens distortion in the captured images (intrinsics). Using the calibration results, Motive constructs a 3D capture volume, and motion tracking is accomplished within this volume. All of the calibration tools can be found under the Calibration pane. Read through the Calibration page to learn about the calibration process and what other tools are available for more efficient workflows.
See Also: Calibration page.
Duo/Trio Tracking Bars: Camera calibration is not needed for Duo/Trio Tracking Bars. The cameras are pre-calibrated using their fixed camera placements, which allows the tracking bars to work right out of the box without the calibration process. To adjust the ground plane, use the Coordinate System Tools in Motive.
Masking
Remove any unwanted objects and physically cover any extraneous IR light reflections or interferences within the capture volume.
[Motive:Calibration pane] In Motive, open the Calibration pane or use the calibration layout (CTRL + 1).
[Motive:Calibration pane] Click the button from the Camera Preview pane.
[Motive:Calibration pane] Mask the remaining extraneous reflections using Motive. Click Block Visible from the Calibration pane, or use the icon in the Camera Preview pane, to apply software masking and automatically block any light sources or reflections that cannot be removed from the volume. Once the masks are applied, all of the extraneous reflections (white) in the 2D Camera Preview pane will be covered with red pixels.
Wanding
Bring out the calibration wand.
[Motive:Calibration pane] From the Calibration pane, make sure the Calibration Type is set to Full and the correct type of the wand is specified under the OptiWand section.
[Motive:Calibration pane] Click Start Wanding to begin wanding.
Bring the wand into the capture volume, and wave the wand throughout the volume and allow cameras to collect wanding samples.
[Motive:Calibration pane] When the system indicates enough samples have been collected, click the Calculate button to begin the calculation. This may take a few minutes.
[Motive:Calibration pane] When the Ready to Apply button becomes enabled, click Apply Result.
[Motive] Calibration results window will be displayed. After examining the wanding result, click Apply to apply the calibration.
Wanding tips
For best results, collect wand samples evenly and comprehensively throughout the volume, covering both low and high elevations. If you wish to start calibrating inside the volume, cover one of the markers and expose it where you wish to start wanding. When at least two cameras detect all three markers while no other reflections are present in the volume, the wand will be recognized, and Motive will start collecting samples.
A sufficient sample count for calibration varies with the size of the volume, but in general, collect 2,500 to 6,000 samples for each camera. Once a sufficient number of samples has been collected, click the Calculate button under the Calibration section.
During the wanding process, each camera should see only the three markers on the calibration wand. If any of the cameras are detecting extraneous reflections, go back to the masking step and mask them.
Setting the Ground Plane
Now that all of the cameras have been calibrated, the next step is to define the ground plane of the capture volume.
Place a calibration square inside the capture volume. Position the square so that the vertex marker is placed directly over the desired global origin.
Orient the calibration square so that the longer arm points along the desired +Z axis and the shorter arm points along the desired +X axis of the volume. Motive uses a y-up right-handed coordinate system.
Level the calibration square parallel to the ground plane.
(Optional) In the 3D view in Motive, select the calibration square markers. If retro-reflective markers on the calibration square are the only reconstructions within the capture volume, Motive will automatically detect the markers.
Access the Ground Plane tab in the Calibration pane.
While the calibration square markers are selected, click Set Ground Plane from the Ground Plane Calibration Square section.
Motive will prompt you to save the calibration file. Save the file to the corresponding session folder.
Once the camera system has been calibrated, Motive is ready to collect data. Before doing so, let's prepare the session folders for organizing the capture recordings and define the trackable assets, including Rigid Bodies and/or Skeletons.
Motive Recordings
Each capture recording is saved in a Take (TAK) file, and related Take files can be organized in session folders. Start your capture by first creating a new session folder: create a new folder in the desired directory on the host computer and load it into the Data pane, either by clicking the icon or by dragging and dropping the folder onto the data management pane. If no session folder is loaded, all recordings will be saved to the default folder located in the user documents directory (Documents\OptiTrack\Default). All newly recorded Takes will be saved within the currently selected session folder, which is marked with the symbol.
See Also: Motive Basics page.
Motive Profiles
Motive's software configurations are saved to Motive Profiles (*.motive extension). All of the application-related settings can be saved into the Motive profiles, and you can export and import these files and easily maintain the same software configurations.
Place the retro-reflective markers onto subjects (Rigid Body or Skeleton) that you wish to track. Double-check that the markers are attached securely. For skeleton tracking, open the Builder pane, go to skeleton creation options, and choose a marker set you wish to use. Follow the skeleton avatar diagram for placing the markers. If you are using a mocap suit, make sure that the suit fits as tightly as possible. Motive derives the position of each body segment from related markers that you place on the suit. Accordingly, it is important to prevent the shifting of markers as much as possible. Sample marker placements are shown below.
See Also: Markers page for marker types, or Rigid Body Tracking and Skeleton Tracking page for placement directions.
Tip: For creating trackable assets, click the Layout → Create menu item to access the model creation layout.
Create Rigid Body
To define a Rigid Body, simply select three or more markers in the Perspective View, right-click, and select Rigid Body → Create Rigid Body From Selected. You can also use the CTRL+T hotkey or the Builder pane to create Rigid Body assets.
Create Skeleton
To define a skeleton, have the actor enter the volume with markers attached at the appropriate locations. Open the Builder pane and select Skeleton and Create. Under the marker set section, select the marker set you wish to use, and a corresponding model with the desired marker locations will be displayed. After verifying that the marker locations on the actor correspond to those in the Builder pane, instruct the actor to strike the calibration pose. The most common calibration pose is the T-pose, which requires a proper standing posture with the back straight and the head looking directly forward, with both arms stretched out to the sides to form a “T” shape. While the actor holds the T-pose, select all of the markers of the desired skeleton in the 3D view and click the Create button in the Builder pane. In some cases, you may not need to select the markers if only the desired actor is in view.
See Also: Rigid Body Tracking page and Skeleton Tracking page.
Tip: For recording a capture, use the Layout → Capture menu item to access the capture layout.
Once the volume is calibrated and skeletons are defined, you are ready to capture. In the Control Deck at the bottom, press the dimmed red record button, or simply press the spacebar while in Live mode, to begin capturing. The button illuminates in bright red to indicate that recording is in progress. You can stop recording by clicking the record button again, and a corresponding capture file (TAK extension), also known as a capture Take, will be saved within the current session folder. Once a Take has been saved, you can play back, reconstruct, edit, and export your data in a variety of formats for additional analysis or use with most 3D software.
When tracking skeletons, it is beneficial to start and end the capture with a T-pose. This allows you to recreate the skeleton in post-processing when needed.
See Also: Data Recording page.
After capturing a Take, the recorded 3D data and its trajectories can be post-processed using the Data Editing tools, which can be found in the Edit Tools pane. Data editing tools provide post-processing features such as deleting unreliable trajectories, smoothing selected trajectories, and interpolating missing (occluded) marker positions. Post-editing the 3D data can improve the quality of the tracking data.
Tip: For data editing, use the Layout → Edit menu item to access the edit layout.
General Editing Steps
Skim through the overall frames in a Take to get an idea of which frames and markers need to be cleaned up.
Refer to the Labels pane and inspect the gap percentage for each marker.
Select a marker that is often occluded or misplaced.
Look through the frames in the Graph pane, and inspect the gaps in the trajectory.
For each gap in frames, look for an unlabeled marker at the expected location near the solved marker position. Re-assign the proper marker label if the unlabeled marker exists.
Use the Trim Tails feature to trim both ends of the trajectory at each gap. This trims off a few frames adjacent to the gap where tracking errors might exist, preparing occluded trajectories for gap filling.
Find the gaps to be filled, and use the Fill Gaps feature to model the estimated trajectories for occluded markers.
Re-Solve assets to update the solve from the edited marker data
Markers detected in the camera views are trajectorized into 3D coordinates. The reconstructed markers need to be labeled so that Motive can distinguish different trajectories within a capture. Trajectories of labeled reconstructions can be exported individually or solved altogether to track the movements of the target subjects. Markers associated with Rigid Bodies and Skeletons are labeled automatically through the auto-labeling process. Note that Rigid Body and Skeleton markers can be auto-labeled both in Live mode (before capture) and in Edit mode (after capture). Individual markers can also be labeled, but each must be labeled manually in post-processing using assets and the Labeling pane. These manual labeling tools can also be used to correct labeling errors. Read through the Labeling page for more details on assigning and editing marker labels.
Auto-label: Automatically label sets of Rigid Body markers and skeleton markers using the corresponding asset definitions.
Manual Label: Label individual markers manually using the Labeling pane, assigning labels defined in the Marker Set, Rigid Body, or Skeleton assets.
See Also: Labeling page.
Changing Marker Labels and Colors
When needed, you can use the Marker Sets pane to adjust marker labels for both Rigid Body and Skeleton markers. You can also adjust marker sticks and marker colors as needed.
Motive exports reconstructed 3D tracking data in various file formats, and the exported files can be imported into other pipelines to further utilize the capture data. Supported formats include CSV and C3D for Motive: Tracker and, additionally, FBX, BVH, and TRC for Motive: Body. To export tracking data, select a Take to export and open the export dialog window, which can be accessed from File → Export Tracking Data or by right-clicking a Take in the Data pane and selecting Export Tracking Data. Multiple Takes can be selected and exported from Motive or by using the Motive Batch Processor. From the export dialog window, the frame rate, measurement scale, and frame range of the exported data can be configured. Frame ranges can also be specified by selecting a frame range in the Graph View pane before exporting a file. Corresponding export options are available for each file format in the export dialog window.
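If you post-process exported data in your own scripts, a CSV export can be loaded with standard tooling. The sketch below is a minimal example using pandas; it assumes a default CSV export in which several metadata/header rows precede the column headers, so the skiprows value, the helper name, and the example file name are assumptions to adjust against your own export settings.

```python
# Minimal sketch for loading an exported tracking-data CSV with pandas.
# Assumption: the file begins with several metadata/header rows before the
# actual column headers; adjust `skiprows` (and any column handling) to match
# the layout produced by your export options.
import pandas as pd

def load_tracking_csv(path: str, skiprows: int = 6) -> pd.DataFrame:
    """Load an exported tracking-data CSV, skipping the metadata rows."""
    return pd.read_csv(path, skiprows=skiprows, header=0)

# Example usage (hypothetical file name):
# frames = load_tracking_csv("session_01_take_001.csv")
# print(frames.head())
```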
See Also: Data Export page.
Motive offers multiple options for streaming tracking data to external applications in real time. Tracking data can be streamed in both Live mode and Edit mode. Streaming plugins are available for Autodesk MotionBuilder, Visual3D, The MotionMonitor, Unreal Engine 5, 3ds Max, Maya (VCS), and VRPN, and they can be downloaded from the OptiTrack website. For other streaming options, the NatNet SDK enables users to build custom client and server applications to stream capture data. Common motion capture applications rely on real-time tracking, and the OptiTrack system is designed to deliver data at extremely low latency, even when streaming to third-party pipelines. Detailed instructions on specific streaming protocols are included in the PDF documentation that ships with the respective plugins and SDKs.
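For a custom client, the NatNet SDK ships with sample code in several languages. The sketch below is loosely modeled on the Python sample's NatNetClient class; the module name, setter/method names, and callback signature follow one version of that sample and may differ in yours, so treat them as assumptions and compare against the samples bundled with the SDK you download.

```python
# Hedged sketch of a custom NatNet streaming client, modeled on the Python
# sample (NatNetClient.py) that ships with the NatNet SDK. Class, method, and
# callback names are assumptions based on one SDK version; verify them against
# the sample code included with your SDK download.
from NatNetClient import NatNetClient  # provided by the NatNet SDK Python samples

def on_rigid_body(body_id, position, rotation):
    # In the sample, position is an (x, y, z) tuple in meters and rotation is
    # a quaternion (qx, qy, qz, qw); confirm the order for your SDK version.
    print(f"Rigid body {body_id}: pos={position} rot={rotation}")

client = NatNetClient()
client.set_server_address("127.0.0.1")  # IP address of the Motive host PC
client.set_client_address("127.0.0.1")  # IP address of this client machine
client.set_use_multicast(True)          # must match Motive's streaming settings
client.rigid_body_listener = on_rigid_body

client.run()  # starts the background receive threads
input("Streaming rigid body data... press Enter to exit.\n")
```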
See Also: Data Streaming page