Following the Motive 3.0.2 release, an internet connection is no longer required for initial use of Motive. If you are currently using Motive 3.0.1 or older, please install this new release from our Software webpage. Please note that an internet connection is still required to download Motive.exe from the OptiTrack website.
Important Update:
New licensing system in Motive 3. Please check the OptiTrack website for details on Motive licenses.
Security Key (Motive 3.x): Starting from version 3.0, a USB Security Key will be required to use Motive. USB Hardware Keys that were used for activating older versions of Motive will no longer work with 3.0, and they will need to be replaced with the USB Security key. For any questions, please contact us.
Hardware Key (Motive 2.x or below): Motive 2.x versions will still require USB Hardware Key.
USB Cameras
USB cameras, including the Flex series, tracking bars, and the Slim3U, are not currently supported in Motive 3.x. For USB camera systems, please use a Motive 2.x version. Go to the Motive 2.3 documentation.
Required PC specifications vary depending on the size of the camera system. Generally, systems with more than 24 cameras require the recommended specs.
To install Motive, you must first download the Motive installer from our website. Follow the Downloads link under the Support page (http://optitrack.com/downloads/), and you will be able to find the newest version of Motive or the previous releases if needed. Both Motive: Body and Motive: Tracker use the same software installer.
1. Run the Installer
When the download is complete, run the installer to initiate the installation process.
2. Install the USB Driver and Dependencies
If you are installing Motive for the first time, it will prompt you to install the OptiTrack USB Driver. This driver is required for all OptiTrack USB devices including the Security Key. You may also need to install other dependencies such as the C++ redistributable. After all dependencies have been installed, continue onto installing Motive.
3. Install Motive
Follow the installation prompts and install Motive in your desired directory. We recommend installing the software in the default directory, C:\Program Files\OptiTrack\Motive.
4. OptiTrack Peripheral Module
At the Custom Setup section of the installation process, you will be asked whether to install the Peripheral Devices module along with Motive. If you plan to use force plates, NI-DAQ, or EMG devices with the motion capture system, make sure the Peripheral Devices module is installed. If you are not going to use these devices, you may skip to the next step.
Peripheral Module NI-DAQ
If you chose to install the Peripheral Devices module, you will be prompted to install the OptiTrack Peripherals Module along with the NI-DAQmx driver at the end of the Motive installation. Press Yes to install the plugins and the NI-DAQmx driver. This may take a few minutes and only needs to be done once.
5. Finish Installation
After you have completed all the steps above, Motive will be installed. If you want to use additional plugins, visit the downloads page.
Firewall / Anti-Virus
Make sure all antivirus software on the Host PC allows Motive.
For Ethernet cameras, make sure the Windows firewall is configured to allow the camera network to be recognized. Disabling the firewall entirely is another option.
During installation, some antivirus programs (e.g., BitDefender and McAfee) may block Motive from being downloaded. Our software, downloaded directly from OptiTrack.com/downloads, is safe for use and will not harm your computer. If an antivirus program allows Motive to download but you're still unable to view cameras in the Devices pane, or you are seeing frame/data drops, you'll need to reverify that your antivirus or firewall settings allow all traffic from your camera network to Motive and vice versa. In rare cases, you may need to completely uninstall the antivirus software if it continues to interfere with camera communication.
High-Performance
Windows' power saving mode limits CPU usage. To best utilize Motive, set the power plan to High Performance and remove these limitations. You can configure High Performance mode from Control Panel → Hardware and Sound → Power Options, as shown in the image below.
Graphics Card Settings
This is only for computers with integrated graphics.
For computers with integrated graphics, please make sure Motive is set to run on the dedicated graphics card. If the host computer has integrated graphics on the CPU, the PC may switch to the integrated GPU when the computer goes to sleep; when this happens, the viewport may become unresponsive after the computer wakes. To prevent this, go to the Graphics Settings in Windows and browse to Motive to set it as high-performance graphics.
Once you have installed Motive, the next step is to activate the software using the provided license information and a USB Security Key. Motive activation requires a valid Motive 3.0 license, a USB Security Key, and a computer with USB C ports or an adapter for USB A to USB C.
For Motive 3.0 and above, a USB Security Key is required to use the camera system. This key is different from the previous Hardware Key and it improves the security of the camera system. The Security Keys will need to be purchased separately. For more information, please refer to the following page:
There are five different types of Motive licenses: Motive:Body-Unlimited, Motive:Body, Motive:Tracker, Motive:Edit-Unlimited, and Motive:Edit. Each license unlocks different features in the software depending on the use case that the license is intended to facilitate.
The Motive:Body and Motive:Body-Unlimited licenses are intended for either small (up to 3) or large-scale Skeleton tracking applications.
The Motive:Tracker license is intended for real-time Rigid Body tracking applications.
The Motive:Edit and Motive:Edit-Unlimited licenses are intended for users modifying data after it has been captured.
For more information on different types of Motive licenses, check the software comparison table on our website or in the table below.
Step 1. Launch Motive
First, launch Motive.
Step 2. Activate
The Motive splash screen will pop up and it will indicate that the license is not found. Click to open the license tool and fill out the following fields using provided license information. You will need the License Serial Number and License Hash from your order invoice and the Hardware Key Serial Number indicated on the USB security key or the hardware key. Once you have entered all the information, click Activate. If you have already activated the license before on another machine, make sure the same name is entered when activating.
Online Activation Tool
The Motive license can also be activated online using the Online License Activation tool. When you use this tool, you will receive the license file via email; in this case, you will have to place the file in the license folder. Once the license file is in place, insert the corresponding USB Hardware Key to use Motive.
Step 3. License File
If Motive is activated properly, license files will be placed in the license folder. This folder can be accessed from the splash screen or by navigating to Start Menu → All Programs → OptiTrack → OptiTrack License Folder.
License Folder: C:\ProgramData\OptiTrack\License
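As an illustration, the presence of license files can be checked programmatically. The Python sketch below builds the license-folder path from the `PROGRAMDATA` environment variable (which resolves to `C:\ProgramData` on a standard Windows install) and lists any files found there. The helper names are our own, not part of Motive.

```python
import os

def license_folder() -> str:
    # The documented default location: C:\ProgramData\OptiTrack\License.
    # %PROGRAMDATA% resolves to C:\ProgramData on a standard Windows install.
    program_data = os.environ.get("PROGRAMDATA", r"C:\ProgramData")
    return os.path.join(program_data, "OptiTrack", "License")

def list_license_files(folder=None) -> list:
    # Return the files in the license folder, or [] if the folder
    # does not exist (i.e. Motive has not been activated yet).
    folder = folder if folder is not None else license_folder()
    if not os.path.isdir(folder):
        return []
    return sorted(os.listdir(folder))
```

An empty result simply means activation has not placed a license file yet.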
Step 4. Security Key
If not already done, insert the corresponding Security Key that was used to activate the license. The matching security key must be connected to the computer in order to use Motive.
Notes on Connecting the Security Key
Connect the Security Key to a USB port where the USB bus does not have a lot of traffic. This is especially important if you have other peripheral devices that connect to the computer via USB. If there is too much data flowing through the USB bus used by the Security Key, Motive might not be able to connect to the cameras.
Make sure the USB Hardware Key is unplugged. If both the Hardware Key and the Security Key are plugged into the same computer, Motive may not activate properly.
About Motive
You can also check the status of the activated license from the About Motive pop-up. This can be accessed from the splash screen when Motive fails to detect a valid license, or from the Help → About Motive menu in Motive.
License Data:
In this panel, you can also export the license data into a TXT file by clicking License Data.... If you are having any issues activating Motive, please export the license data file and attach it to your support email.
OptiTrack software can be used on a new computer by reactivating the license, using the same license information. When reactivating, make sure to enter the same name information as before. After the license has been reactivated, the corresponding USB Security Key needs to be inserted into the PC in order to verify and run the software.
Another method of using the license is by copying the license file from the old computer to the new computer. The license file can be found in the OptiTrack License folder which can be accessed through the Motive Splash Screen or top Help menu in Motive.
For more information on licensing of Motive, refer to the Licensing FAQs from the OptiTrack website:
For more questions, contact our Support:
When contacting support, please attach the license data (TXT) file exported from the About Motive panel as a reference.
| | Recommended | Minimum |
|---|---|---|
| OS | Windows 10, 11 (64-bit) | Windows 10, 11 (64-bit) |
| CPU | Intel i7 or better, running at 3 GHz or greater | Intel i7, 3 GHz |
| RAM | 16 GB of memory | 4 GB of memory |
| GPU | GTX 1050 or better with the latest drivers and support for OpenGL 3.2+ | Supports OpenGL 3.2+ |

| License | Motive:Edit | Motive:Edit-Unlimited | Motive:Tracker | Motive:Body | Motive:Body-Unlimited |
|---|---|---|---|---|---|
| Live Rigid Bodies | 0 | 0 | Unlimited | Unlimited | Unlimited |
| Live Skeletons | 0 | 0 | 0 | Up to 3 | Unlimited |
| Edit Rigid Bodies | Unlimited | Unlimited | Unlimited | Unlimited | Unlimited |
| Edit Skeletons | Up to 3 | Unlimited | 0 | Up to 3 | Unlimited |
This page provides detailed information on the continuous calibration feature, which can be enabled from the Calibration pane.
The Continuous Calibration feature ensures your system always remains optimally calibrated, requiring no user intervention to maintain the tracking quality. It uses highly sophisticated algorithms to evaluate the quality of the calibration and the triangulated marker positions. Whenever the tracking accuracy degrades, Motive will automatically detect and update the calibration to provide the most globally optimized tracking system.
Ease of use. This feature provides a much easier user experience because the capture volume will not have to be re-calibrated as often, saving a lot of time. You can simply enable this feature and have Motive maintain the calibration quality.
Optimal tracking quality. Always maintains the best tracking solution for live camera systems. This ensures that your captured sessions retain the highest quality calibration. If the system receives inadequate information from the environment, the calibration will not update, so your system never degrades based on sporadic or spurious data. A moderate increase in the number of real optical tracking markers in the volume and an increase in camera overlap improves the likelihood of a higher quality update.
Works with all camera types. Continuous calibration works with all OptiTrack camera models.
For continuous calibration to work as expected, the following criteria must be met:
Live Mode Only. Continuous calibration only works in Live mode.
Markers Must Be Tracked. Continuous calibration looks at tracked reconstructions to assess and update the calibration. Therefore, at least some number of markers must be tracked within the volume.
Majority of Cameras Must See Markers. A majority of cameras in a volume needs to receive some tracking data within a portion of their field of view in order to initiate the calibration process. Because of this, traditional perimeter camera systems typically work the best. Each camera should additionally see at least 4 markers for optimal calibration. If not all the cameras see the markers at the same time, anchor markers will need to be set up to improve the calibration updates.
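The criteria above can be sketched as a simple per-camera check. This is a conceptual illustration only (the function name and threshold logic are our own, not Motive's internal algorithm): a majority of cameras must see markers, and each camera that does should ideally see at least four.

```python
def can_sample_calibration(markers_per_camera: dict, min_markers: int = 4) -> bool:
    # markers_per_camera maps camera id -> number of markers currently
    # seen by that camera. (Hypothetical helper, for illustration only.)
    counts = list(markers_per_camera.values())
    if not counts:
        return False
    cameras_seeing = sum(1 for n in counts if n > 0)
    majority = cameras_seeing > len(counts) / 2         # a majority must see markers
    enough_each = all(n >= min_markers for n in counts if n > 0)
    return majority and enough_each
```

Systems where some cameras never meet these conditions are exactly the case where anchor markers (described below in the source material) become useful.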
To enable Continuous Calibration, calibrate the camera system first, then enable the Continuous Calibration setting at the bottom of the Calibration pane. Once enabled, Motive continuously monitors the residual values of captured marker reconstructions, and when an updated calibration is better than the existing one, the calibration updates automatically. Please note that at least four (by default) markers must be tracked in the volume for continuous calibration to work. You will also be able to monitor the sampling progress and when the calibration was last updated.
Anchor markers can be set up in Motive to further improve continuous calibration. When properly configured, anchor markers improve continuous calibration updates, especially on systems that consist of multiple sets of cameras separated into different tracking areas, by obstructions or walls, without camera view overlap. They also provide extra assurance that the global origin will not shift during each update, although the continuous calibration feature itself already checks for this.
Follow the steps below for setting up the anchor marker in Motive:
Adding Anchor Markers in Motive
First, make sure the entire camera volume is fully calibrated and prepared for marker tracking.
Place any number of markers in the volume to assign them as the anchor markers.
Make sure these markers are securely fixed in place within the volume. It's important that the distances between these markers do not change throughout the continuous calibration updates.
Open the Calibration pane and select the second page at the bottom to access the anchor marker feature.
In the 3D viewport, select the markers that are going to be assigned as anchors.
Click on Add to add the selected markers as anchor markers.
Once markers are added as anchor markers, magenta spheres will appear around the markers indicating the anchors have been set.
Add more anchors as needed. Again, it is important that these anchor markers do not move throughout tracking. If the anchor markers need to be reset, for example because a marker was displaced, you can clear the anchor markers and reassign them.
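Because anchor markers are only useful while their relative geometry stays fixed, their stability can be verified by comparing pairwise inter-marker distances against the distances measured at setup time. The sketch below is our own illustration, not Motive's internal check, and it assumes the markers are listed in the same order in both snapshots:

```python
import math
from itertools import combinations

def pairwise_distances(points):
    # All pairwise distances between 3D marker positions,
    # in a fixed combinatorial order.
    return [math.dist(a, b) for a, b in combinations(points, 2)]

def anchors_stable(reference, current, tol=0.001):
    # True when every inter-marker distance matches the reference
    # within `tol` (same units as the coordinates). Assumes `current`
    # lists the same markers in the same order as `reference`.
    ref, cur = pairwise_distances(reference), pairwise_distances(current)
    return len(ref) == len(cur) and all(
        abs(r - c) <= tol for r, c in zip(ref, cur)
    )
```

Note that a rigid translation or rotation of the whole anchor set leaves all pairwise distances unchanged, so only relative displacement of individual markers trips this check.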
For multi-room setups, it is useful to group cameras into partitions. This allows for Continuous Calibration to run in each individual room without the need for camera view overlap.
From the Properties pane of a camera you can assign a Partition ID from the advanced settings.
You'll want to assign all the cameras in the same room the same Partition ID. Once assigned these cameras will all contribute to Continuous Calibration for their particular space. This will help ensure the accuracy of Continuous Calibration for each individual space that is a part of the whole system.
In the event that you need to manually adjust cameras in the 3D view, you can enable Editable in 3D View in General Settings. To access this setting, you'll need to select Show Advanced from the 3-dot more options dropdown at the top. This will populate a Calibration section on this window.
This allows you to use the Gizmo Tools to Translate, Rotate, and Scale cameras to their desired locations.
For a full list of Log pane Continuous Calibration statuses, please see the Log pane page.
This notice indicates the need for more markers to be visible by a particular camera. For instance, if camera 2 is not seeing enough markers in its camera view, the Log pane will inform you that you need more markers for that particular camera.
This indicates the need for more markers to be spread in more areas of the camera view.
Once Motive is prepared, the next step is to place markers on the subject and create corresponding assets. There are three different types of assets in Motive:
Marker Set
Rigid Body
Skeleton
For each Take, the assets involved are displayed in the Assets pane, and the related properties appear in the Properties pane when an asset is selected in Motive.
The Marker Set is a list of marker labels that are used to annotate reconstructed markers. Marker Sets should only be used in situations where it is not possible to define a Rigid Body or Skeleton. In this case, the user will manually label markers in post-processing. When doing so, having a defined set of labels (Marker Set) makes this process much easier. Marker Sets within a Take will be listed in the Labels pane, and each label can be assigned through the Labeling process.
Rigid body and Skeleton assets are the Tracking Models. Rigid bodies are created for tracking rigid objects, and Skeleton assets are created for tracking human motions. These assets automatically apply a set of predefined labels to reconstructed trajectories using Motive's tracking and labeling algorithms, and Motive uses the labeled markers to calculate the position and orientation of the Rigid Body or Skeleton Segment. Both Rigid Body and Skeleton tracking data can be sent to other pipelines (e.g. animations and biomechanics) for extended applications. If new Skeletons or Rigid Bodies are created during post-processing, the take will need to be reconstructed and auto-labeled in order to apply the changes to the 3D data.
Assets may be created in either Live mode (before capture) or Edit mode (after capture, from a loaded TAK file).
The Assets pane lists out all assets that are available in the current capture. You can easily copy these assets onto other recorded Take(s) or to the live capture by doing the following:
Copying Assets to a Recorded Take
In order to copy and paste assets onto another Take, right-click on the desired Take to bring up the context menu and choose Copy Assets to Takes. This will bring up a dialog window for selecting which assets to move.
Copying Assets to Multiple Recorded Takes
If you wish to copy assets to multiple Takes, select multiple Takes in the Data pane until all the desired Takes are highlighted. Then repeat the steps above for copying to a single Take by right-clicking on any of the selected Takes. This copies the selected assets to all the selected Takes in the Data pane.
Copying Assets from a Recorded Take to the Live Capture
If you have a list of assets in a Take that you wish to import into the live capture, you can simply do this by right-clicking on the desired assets on the Assets pane, and selecting Copy Assets to Live.
For selecting multiple items, use Shift-click or Ctrl-click.
Assets can be exported to a Motive user profile (.MOTIVE) file if they need to be re-imported later. The user profile is a text-readable file that can contain various Motive configuration settings, including asset definitions.
When asset definitions are exported to a MOTIVE user profile, the file stores the calibrated marker arrangement of each asset, so the assets can be imported into different Takes without creating new ones in Motive. Note that these files specifically store the spatial relationships of the markers; therefore, only identical marker arrangements will be recognized and defined with the imported asset.
To export all of the assets in Live mode or in the current TAK file, go to Files tab → Export Assets. You can also use Files tab → Export Profile to export other software settings along with the assets.
Before diving into specific details, let’s begin with a brief overview of Motive. If you are new to using Motive, we recommend you to read through this page and learn about the basic tools, configurations and navigation controls, as well as instructions on managing capture files.
Motive will save and load Motive-specific file formats including the Take files (TAK), camera calibration files (CAL), and Motive user profiles (MOTIVE) that can contain most of the software settings as well as asset definitions for Skeletons and Rigid Body objects. Asset definitions are related to trackable objects in Motive which will be explained further in the Rigid Body Tracking and Skeleton Tracking page.
Motive file management is centered on the Take (TAK) file. A TAK file is a single motion capture recording (aka 'take' or 'trial'), which contains all the information necessary to recreate the entire capture from the file, including camera calibration, camera 2D data, reconstructed and labeled 3D data, data edits, solved joint angle data, tracking models (Skeletons, Rigid Bodies), and any additional device data (audio, force plate, etc.). A Motive Take (TAK) file is a completely self-contained motion capture recording, and it can be opened by another copy of Motive on another system.
Take files are forward compatible but not backward compatible: if you record in Motive 3.x and try to play it back in Motive 2.x, Motive will throw an error. You can, however, record a Take in Motive 2.x and play it back in Motive 3.x.
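The compatibility rule can be stated as a one-line check (an illustration of the rule above, not an OptiTrack API): playback succeeds only when the Motive major version doing the playback is at least the major version that recorded the Take.

```python
def can_play_back(recorded_major: int, playback_major: int) -> bool:
    # A Take opens in the Motive major version that recorded it,
    # or in a newer one (forward compatibility), but not in an
    # older one (no backward compatibility).
    return playback_major >= recorded_major
```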
If you have any old recordings from Motive 1.7 or below, with BAK file extension, please import these recordings into Motive 2.0 version first and re-save them into TAK file format in order to use it in Motive version 3.0 or above.
A Session is a file folder that allows the user to organize multiple similar Takes (e.g. Monday, Tuesday, Wednesday, or StaticTrials, WalkingTrials, RunningTrials, etc.). Whether you are planning the day's shoot or incorporating a group of Takes mid-project, creating session folders can help manage complex sets of data. In the Data pane, you can import session folders that contain multiple Takes or create a new folder to start a new capture session. For the most efficient workflow, plan the mocap session before the capture and organize a list of captures (shots) that need to be completed. Type the Take names in a spreadsheet or a text file, then copy and paste the list, which will automatically create empty Takes (a shot list) with the corresponding names.
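The shot-list workflow above can be prepared offline. The sketch below writes one planned Take name per line to a plain text file; the resulting list can then be copied and pasted into Motive's Data pane, which creates empty Takes with those names. (The function name and file layout are our own.)

```python
def write_shot_list(path: str, take_names: list) -> int:
    # One Take name per line, ready to copy and paste into the
    # Data pane. Returns the number of names written.
    with open(path, "w", encoding="utf-8") as f:
        for name in take_names:
            f.write(name + "\n")
    return len(take_names)
```

For example, `write_shot_list("monday.txt", ["Static01", "Walk01", "Run01"])` produces a three-line file whose contents can be pasted directly.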
Please refer to the Session Folders section of the Data pane page for more information on working with these folders.
Software configurations are saved onto the motive profile (*.motive) files. In the motive profile, all of the application-related configurations, lists of assets, and the loaded session folders are saved and preserved. You can export and import the profiles to easily maintain the same software configurations each time Motive is launched.
All of the currently configured software settings are saved to the C:\ProgramData\OptiTrack\MotiveProfile.motive file periodically throughout capture and when closing Motive. This file is the default application profile, and it is loaded again when Motive launches, so all of the configurations persist between Motive sessions. If you wish to revert all settings to their factory defaults, use the Reset Application Settings button under the Edit tab of the main command bar.
Motive profiles can also be exported and imported from the File menu of the main command bar. Using the profiles, you can easily transfer and persist Motive configurations among different instances and different computers.
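Moving a profile between machines amounts to a file copy. The sketch below (helper names are our own; Motive's File-menu export/import is the supported route) backs up the default application profile so the same configuration can be restored elsewhere:

```python
import os
import shutil

# Default profile location, from the documentation above.
DEFAULT_PROFILE = r"C:\ProgramData\OptiTrack\MotiveProfile.motive"

def backup_profile(src: str = DEFAULT_PROFILE, dest_dir: str = ".") -> str:
    # Copy the profile (preserving timestamps) into dest_dir
    # and return the path of the copy.
    if not os.path.isfile(src):
        raise FileNotFoundError(src)
    os.makedirs(dest_dir, exist_ok=True)
    dest = os.path.join(dest_dir, os.path.basename(src))
    shutil.copy2(src, dest)
    return dest
```

Restoring on another machine is the same copy in reverse, into the destination machine's ProgramData\OptiTrack folder, before launching Motive.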
The following are saved in the application profile:
Application Settings
Live Pipeline Settings
Streaming Settings
Synchronization Settings
Export Settings
Rigid Body & Skeleton assets
Rigid Body & Skeleton settings
Labeling settings
Hotkey configurations
A calibration file is a standalone file that contains all of the information required to completely restore a calibrated camera volume, including the position and orientation of each camera, lens distortion parameters, and the camera settings. After a camera system is calibrated, a CAL file can be exported and imported back into Motive when needed. It is therefore recommended to save a camera calibration file after each round of calibration.
Please note that reconstruction settings also get stored in the calibration file; just like how it gets stored in the MOTIVE profile. If the calibration file is imported after the profile file was loaded, it may overwrite the previous reconstruction settings as it gets imported.
Note that this file is reliable only if the camera setup has remained unchanged since the calibration. Read more from Calibration page.
The following are saved in the calibration file:
Reconstruction settings
Camera settings
Position and orientation of the cameras
Location of the global origin
Lens distortion of each camera
Default System Calibration
The default system calibration is saved to the C:\ProgramData\OptiTrack\Motive\System Calibration.cal file and loaded automatically at application startup to provide instant access to the 3D volume. This file is also updated each time the calibration is modified and when closing Motive.
In Motive, the main viewport is fixed at the center of the UI and is used for monitoring the 2D or 3D capture data in both live capture and playback of recorded data. The viewport can be set to either perspective view or camera view. The Perspective View mode shows the reconstructed 3D data within the calibrated 3D space, and the Camera View mode shows 2D images from each camera in the setup. These modes can be selected from the drop-down menu at the top-right corner, and both of these views are essential for assessing and monitoring the tracking data.
Use the dropdown menu at the top-left corner to switch into the Perspective View mode. You can also use the number 1 hotkey while on a viewport.
Used to look through the reconstructed 3D representation of the capture, analyze marker positions, rays used in reconstruction, etc.
The context menu in the Perspective View allows you to access more options related to the markers and assets in 3D tracking data.
Use the dropdown menu at the top-left corner to switch into the Camera View mode. You can also use the number 2 hotkey while on a viewport.
Each camera's view can be accessed from the Camera Preview pane, which displays the images being transmitted from each camera in the selected image processing mode (e.g., grayscale or object mode).
Detected IR lights and/or reflections are also shown in this pane. Only the IR lights that satisfy the object filters get considered as markers.
From the Camera Preview pane, you can mask certain pixel regions to exclude them from the process.
When needed, the viewport can be split into four smaller views. This can be selected from the menu at the top-right corner of the viewport, or with the Shift + 4 hotkey.
Most of the navigation controls in Motive are customizable, including both mouse and Hotkey controls. The Hotkey Editor Pane and the Mouse Control Pane under the Edit tab allow you to customize mouse navigation and keyboard shortcuts to common operations.
Mouse controls in Motive can be customized from the application settings panel to match your preference. Motive also includes a variety of common mouse control presets so that new users can easily start controlling Motive. Available preset control profiles include Motive, Blade, Maya, and Visual3D. The following table shows a few basic actions that are commonly used for navigating the viewports in Motive.
Using the Hotkeys can speed up workflows. Most of the default hotkeys are listed on the Motive Hotkeys page. When needed, the hotkeys can also be customized from the application settings panel which can be accessed under the Edit tab. Various actions can be assigned with a custom hotkey using the Hotkey Editor.
The Control Deck is always docked at the bottom of Motive, and it provides both recording and navigation controls over Motive's two primary operating modes: Live mode and Edit mode.
In the Live Mode, all cameras are active and the system is processing camera data. If the mocap system is already calibrated, Motive is live-reconstructing 2D camera data into labeled and unlabeled 3D trajectories (markers) in real-time. The live tracking data can be streamed to other applications using the data streaming tools or the NatNet SDK. Also, in Live mode, the system is ready for recording and corresponding capture controls will be available in the Control Deck.
In the Edit Mode, the cameras are not active, and Motive processes a loaded Take file (pre-recorded data). The playback controls are available in the Control Deck, and a small timeline appears at the top of the Control Deck for scrubbing through the recorded frames. In this mode, you can review the recorded 3D data from the TAK, make post-processing edits, and/or manually assign marker labels to the recorded trajectories before exporting the tracking data. When needed, you can also switch to the 2D mode to view the recorded camera data, understand how the 3D data was obtained, and run the post-processing reconstruction pipeline to re-obtain a new set of 3D data.
Hotkeys: "Shift + ~" is the default hotkey for toggling between Live and Edit modes in Motive.
The Graph View pane is used for plotting live or recorded channel data in Motive. For example, 3D coordinates of the reconstructed markers, 3D positions and orientations of Rigid Body assets, force plate data, analog data from data acquisition devices, and more can be plotted on this pane. You can switch between existing layouts or create a custom layout for plotting specific channel data.
Basic navigation controls are highlighted below. For more information, read through the Graph View pane page.
Navigate Frames (Alt + Left-click + Drag)
Alt + left-click on the graph and drag the mouse left and right to navigate through the recorded frames. You can do the same with the mouse scroll as well.
Panning (Scroll-click + Drag)
Scroll-click and drag to pan the view vertically and horizontally throughout plotted graphs. Dragging the cursor left and right will pan the view along the horizontal axis for all of the graphs. When navigating vertically, scroll-click on a graph and drag up and down to pan vertically for the specific graph.
Zooming (Right-click + Drag)
Other Ways to Zoom:
Press "Shift + F" to zoom out to the entire frame range.
Zoom into a frame range by Alt + right-clicking on the graph and selecting the specific frame range to zoom into.
When a frame range is selected, press "F" to quickly zoom onto the selected range in the timeline.
Selecting Frame Range (Left-click + Drag)
The frame range selection is used when making post-processing edits on specific ranges of the recorded frames. Select a specific range by left-clicking and dragging the mouse left and right; the selected frame ranges will be highlighted in yellow. You can also select more than one frame range by shift-selecting multiple ranges.
Navigate Frames (Left-click)
Left-click and drag on the nav bar to scrub through the recorded frames. You can do the same with the mouse scroll as well.
Pan View Range
Scroll-click and drag to pan the view range.
Frame Range Zoom
Zoom into a frame range by re-sizing the scope range using the navigation bar handles. You can also easily do this by Alt + right-clicking on the graph and selecting a specific range to zoom into.
Working Range / Playback range
The working range (also called the playback range) is both the view range and the playback range of the corresponding Take in Edit mode. Recorded tracking data is played back and shown on the graphs only within the working range. This range can also be used to output a specific frame range when exporting tracking data from Motive.
The working range can be set from different places:
In the navigation bar of the Graph View pane, you can drag the handles on the scrubber to set the working range.
You can also use the navigation controls on the Graph View pane to zoom in or zoom out on the frame ranges to set the working range.
Start and end frames of a working range can also be set from the Control Deck when in the Edit mode.
Selection Range
The selection range is used to apply post-processing edits only to a specific frame range of a Take. The selected frame range will be highlighted in yellow on both the Graph View pane and the Timeline pane.
Gap indication
When playing back a recorded capture, the red coloring on the navigation bar indicates the amount of occlusion among labeled markers. Brighter red means that more markers have labeling gaps.
This pane is used for configuring application-wide settings, including startup configurations, display options for both 2D and 3D viewports, settings for asset creation, and, most importantly, the live-pipeline parameters for the Solver and the 2D Filter settings for the cameras. The Cameras tab includes the 2D filter settings that determine which reflections get considered as marker reflections in the camera views, and the Solver settings determine which 3D markers get reconstructed in the scene from the marker reflections across all of the cameras. References for the available settings are documented in the Application Settings page.
If you wish to reset the default application setting, go to Reset Application Settings under the Edit tab.
Solver Settings
Under the Solver tab, you can configure a real-time solver engine. These settings, including the trajectorizer settings, are one of the most important settings in Motive. These settings determine how 3D coordinates are acquired from the captured 2D camera images and how they are used for tracking Rigid Bodies and Skeletons. Thus, understanding these settings is very important for optimizing the system for the best tracking results.
Camera Settings
Under the Camera tab, you can configure the 2D Camera filter settings (circularity filter and size filter) as well as other display options for the cameras. The 2D Camera filter setting is one of the key settings for optimizing the capture. For most applications, the default settings work well, but it is still beneficial to understand some of the core settings in order for more efficient control over the camera system.
For more information, read through the Application Settings: Live Pipeline page and the Reconstruction and 2D Mode page.
The UI layout in Motive is customizable. All panes can be docked and undocked from the UI. Each pane can be positioned and organized by drag-and-drop using the on-screen docking indicators. Panes may float, dock, or stack; when stacked, they form a tabbed window for cycling through them quickly. Layouts in Motive can be saved and loaded, allowing a user to switch quickly between default and custom configurations suited to different needs. Motive has preset layouts for Calibration, Creating a Skeleton, Capturing (Record), and Editing workflows. Custom layouts can be created, saved, and set as default from the Main Menu -> 'Layout' menu item. Quickly restore a particular layout from the Layout menu, the Layout dropdown at the top right of the Main Menu, or via HotKeys.
Note: Layout configurations from Motive versions older than 2.0 cannot be loaded in the latest versions of Motive. Please re-create and update the layouts for use.
During the calibration process, a calibration square is used to define the global coordinate axes as well as the ground plane for the capture volume. Each calibration square has a different vertical offset value. When defining the ground plane, Motive will recognize the square and ask the user whether to change the value to the matching offset.
When creating a custom ground plane, you can use Motive to help you move the markers to create an approximately 90-degree angle between the three markers. This depends on the quality of your calibration; however, it will still give you a fairly accurate starting point when setting your ground plane.
For Motive 1.7 or higher, the right-handed coordinate system is used as the standard across internal and exported formats and data streams. As a result, Motive 1.7 and later interpret the L-Frame differently than previous releases:
OptiTrack motion capture systems can use both passive and active markers as indicators for 3D position and orientation. An appropriate marker setup is essential for both the quality and reliability of captured data. All markers must be properly placed and must remain securely attached to surfaces throughout the capture. If any markers are taken off or moved, they will become unlabeled from the Marker Set and will stop contributing to the tracking of the attached object. In addition to marker placement, marker counts and specifications (size, circularity, and reflectivity) also influence the tracking quality. Passive (retroreflective) markers need well-maintained retroreflective surfaces in order to fully reflect the IR light back to the camera. Active (LED) markers must be properly configured and synchronized with the system.
OptiTrack cameras track any surfaces covered with retroreflective material, which is designed to reflect incoming light back to its source. IR light emitted from the camera is reflected by passive markers and detected by the camera’s sensor. Then, the captured reflections are used to calculate the 2D marker position, which is used by Motive to compute 3D position through reconstruction. Depending on which markers are used (size, shape, etc.) you may want to adjust the camera filter parameters from the Live Pipeline settings in Application Settings.
The size of markers affects visibility. Larger markers stand out in the camera view and can be tracked at longer distances, but they are less suitable for tracking fine movements or small objects. In contrast, smaller markers are beneficial for precise tracking (e.g. facial tracking and microvolume tracking), but have difficulty being tracked at long distances or in restricted settings and are more likely to be occluded during capture. Choose appropriate marker sizes to optimize the tracking for different applications.
If you wish to track non-spherical retroreflective surfaces, lower the Circularity value in the 2D object filter under the Cameras tab of the application settings. This lowers the circle filter threshold so that non-circular reflections can also be considered as markers. However, keep in mind that this will lower the filtering threshold for extraneous reflections as well.
All markers need to have a well-maintained retroreflective surface. Every marker must satisfy the brightness Threshold defined in the camera properties to be recognized in Motive. Worn markers with damaged retroreflective surfaces will appear dimmer in the camera view, and their tracking may be limited.
Pixel Inspector: You can analyze the brightness of pixels in each camera view by using the pixel inspector, which can be enabled from the Application Settings.
Please contact our Sales team to decide which markers will suit your needs.
OptiTrack cameras can track any surface covered with retro-reflective material. For best results, markers should be completely spherical with a smooth and clean surface. Hemispherical or flat markers (e.g. retro-reflective tape on a flat surface) can be tracked effectively from straight on, but when viewed from an angle, they will produce a less accurate centroid calculation. Hence, non-spherical markers will have a less trackable range of motion when compared to tracking fully spherical markers.
OptiTrack's active solution provides advanced tracking of IR LED markers to accomplish the best tracking results. This allows each marker to be labeled individually. Please refer to the Active Marker Tracking page for more information.
Active (LED) markers can also be tracked with OptiTrack cameras when properly configured. We recommend using OptiTrack’s Ultra Wide Angle 850nm LEDs for active LED tracking applications. If third-party LEDs are used, their illumination wavelength should be at 850nm for best results. Otherwise, light from the LED will be filtered by the band-pass filter.
If your application requires tracking LEDs outside of the 850nm wavelength, the OptiTrack camera should not be equipped with the 850nm band-pass filter, as it will cut off any illumination above or below the 850nm wavelength. An alternative solution is to use the 700nm short-pass filter (for passing illumination in the visible spectrum) and the 800nm long-pass filter (for passing illumination in the IR spectrum). If the camera is not equipped with the filter, the Filter Switcher add-on is available for purchase at our webstore. There are also other important considerations when incorporating active markers in Motive:
Place a spherical diffuser around each LED marker to increase the illumination angle. This will improve the tracking since bare LED bulbs have limited illumination angles due to their narrow beamwidth. Even with wide-angle LEDs, the lighting coverage of bare LED bulbs will be insufficient for the cameras to track the markers at an angle.
If an LED-based marker system will be strobed (to increase range, offset groups of LEDs, etc.), it is important to synchronize their strobes with the camera system. If you require a LED synchronization solution, please contact one of our Sales Engineers to learn more about OptiTrack’s RF-based LED synchronizer.
Many applications that require active LEDs for tracking (e.g. very large setups with long distances from a camera to a marker) will also require active LEDs during calibration to ensure sufficient overlap in camera samples during the wanding process. We recommend using OptiTrack’s Wireless Active LED Calibration Wand for best results in these types of applications. Please contact one of our Sales Engineers to order this calibration accessory.
Proper marker placement is vital for the quality of motion capture data because each marker on a tracked subject is used as an indicator for both position and orientation. When an asset (a Rigid Body or Skeleton) is created in Motive, the unique spatial relationship of its markers is calibrated and recorded. The recorded information is then used to recognize the markers of the corresponding asset during the auto-labeling process. For best tracking results, when multiple subjects with a similar shape are involved in the capture, it is necessary to offset their marker placements to introduce asymmetry and avoid congruency.
Read more about marker placements from the Rigid Body Tracking page and the Skeleton Tracking page.
Asymmetry
Asymmetry is the key to avoiding congruency when tracking multiple Marker Sets. When there is more than one similar marker arrangement in the volume, marker labels may be confused. Thus, it is beneficial to place segment markers — joint markers must always be placed on anatomical landmarks — in asymmetrical positions for similar Rigid Bodies and Skeleton segments. This provides a clear distinction between two similar arrangements. Furthermore, avoid placing markers in a symmetrical shape within a segment as well. For example, a perfect square marker arrangement will have ambiguous orientation, and frequent mislabels may occur throughout the capture. Instead, follow the rule of thumb of placing the less critical markers in asymmetrical arrangements.
Prepare the markers and attach them to the subject, whether a Rigid Body or a person. Minimize extraneous reflections by covering shiny surfaces with non-reflective tape. Then, securely attach the markers to the subject using adhesives suitable for the surface. Various types of adhesives and marker bases are available on our webstore for attaching the markers: acrylic, rubber, skin adhesive, and Velcro. Multiple types of marker bases are also available: carbon-fiber-filled bases, Velcro bases, and snap-on plastic bases.
In Motive, Rigid Body assets are used for tracking rigid, unmalleable objects. A set of markers is securely attached to the tracked object, and their relative placement is used to identify the object and report six degrees of freedom (6DoF) data. Thus, it is important that the distances between placed markers stay the same throughout the range of motion. Either passive retroreflective markers or active LED markers can be used to define and track a Rigid Body. This page details instructions on how to create Rigid Bodies in Motive and other useful features associated with the assets.
A Rigid Body in Motive is a collection of three or more markers on an object that are interconnected to each other with an assumption that the tracked object is unmalleable. More specifically, it assumes that the spatial relationship among the attached markers remains unchanged and the marker-to-marker distance does not deviate beyond the allowable deflection tolerance defined under the corresponding Rigid Body properties. Otherwise, involved markers may become unlabeled. Cover any reflective surfaces on the Rigid Body with non-reflective materials, and attach the markers on the exterior of the Rigid Body where cameras can easily capture them.
Tip: If you wish to get more accurate 3D orientation data (pitch, roll, and yaw) for a Rigid Body, it is beneficial to spread the markers as far apart as you can within the same Rigid Body. With markers placed this way, even a slight deviation in orientation produces measurable changes in marker positions.
In a 3D space, a minimum of three coordinates is required to define a plane; likewise, at least three markers are required to define a Rigid Body in Motive. Whenever possible, it is best to use four or more markers to create a Rigid Body. Additional markers provide more 3D coordinates for computing the position and orientation of the Rigid Body, making overall tracking more stable and less vulnerable to marker occlusions. When any of the markers are occluded, Motive can reference the other visible markers to solve for the missing data and compute the position and orientation of the Rigid Body.
However, placing too many markers on one Rigid Body is not recommended. When too many markers are placed in close vicinity, markers may overlap in the camera view, and Motive may not resolve individual reflections. This may increase the likelihood of label swaps during capture. Securely place a sufficient number of markers (usually fewer than 10), just enough to cover the main frame of the Rigid Body.
Tip: The recommended number of markers per Rigid Body is 4 ~ 12. A Rigid Body cannot be created with more than 20 markers in Motive.
Within a Rigid Body asset, the markers should be placed asymmetrically because this provides a clear distinction of orientations. Avoid placing the markers in symmetrical shapes such as squares, isosceles triangles, or equilateral triangles. Symmetrical arrangements make asset identification difficult, and they may cause the Rigid Body assets to flip during capture.
When tracking multiple objects using passive markers, it is beneficial to create unique Rigid Body assets in Motive. Specifically, you need to place retroreflective markers in a distinctive arrangement between each object, and it will allow Motive to more clearly identify the markers on each Rigid Body throughout capture. In other words, their unique, non-congruent, arrangements work as distinctive identification flags among multiple assets in Motive. This not only reduces processing loads for the Rigid Body solver, but it also improves the tracking stability. Not having unique Rigid Bodies could lead to labeling errors especially when tracking several assets with similar size and shape.
Note for Active Marker Users
If you are using OptiTrack active markers for tracking multiple Rigid Bodies, it is not required to have unique marker placements. Through the active labeling protocol, active markers can be labeled individually and multiple rigid bodies can be distinguished through uniquely assigned marker labels. Please read through Active Marker Tracking page for more information.
What Makes Rigid Bodies Unique?
The key idea in creating unique Rigid Bodies is to avoid geometric congruency among multiple Rigid Bodies in Motive.
Unique Marker Arrangement. Each Rigid Body must have a unique, non-congruent, marker placement creating a unique shape when the markers are interconnected.
Unique Marker-to-Marker Distances. When tracking several objects, introducing unique shapes can be difficult. Another solution is to vary the marker-to-marker distances. This creates similar shapes of varying sizes, making them distinguishable from the others.
Unique Marker Counts. Adding extra markers is another method of introducing uniqueness. Extra markers will not only make the Rigid Bodies more distinctive, but they will also provide more options for varying the arrangements to avoid congruency.
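The congruency idea above can be sketched as a comparison of sorted marker-to-marker distance lists: two arrangements whose distance lists match are effectively congruent and will confuse labeling. This is a simplified illustration, not Motive's actual algorithm; the tolerance and marker coordinates are assumptions for the example.

```python
import itertools
import math

def distance_signature(markers, tol_mm=1.0):
    """Sorted pairwise marker-to-marker distances, quantized to tol_mm."""
    dists = sorted(math.dist(a, b) for a, b in itertools.combinations(markers, 2))
    return tuple(round(d / tol_mm) for d in dists)

def are_congruent(markers_a, markers_b, tol_mm=1.0):
    """Arrangements with matching distance signatures are effectively congruent."""
    return distance_signature(markers_a, tol_mm) == distance_signature(markers_b, tol_mm)

# A square arrangement and its mirror image share the same signature (ambiguous):
square   = [(0, 0, 0), (100, 0, 0), (100, 100, 0), (0, 100, 0)]
mirrored = [(0, 0, 0), (-100, 0, 0), (-100, 100, 0), (0, 100, 0)]
print(are_congruent(square, mirrored))   # True

# Stretching a single marker-to-marker distance removes the congruency:
stretched = [(0, 0, 0), (120, 0, 0), (100, 100, 0), (0, 100, 0)]
print(are_congruent(square, stretched))  # False
```

This also shows why the square arrangement warned about earlier is a poor choice: its distance list is unchanged under rotation and mirroring.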
What Happens When Rigid Bodies Are Not Unique?
Having multiple non-unique Rigid Bodies may lead to mislabeling errors. However, in Motive, non-unique Rigid Bodies can still be tracked fairly well as long as they are continuously tracked throughout the capture; Motive can refer to the trajectory history to identify and associate corresponding Rigid Bodies across frames. In order to track non-unique Rigid Bodies, you must make sure the Unique setting (Properties → General Settings → Unique) in the Rigid Body Properties of the assets is set to False.
Even though it is possible to track non-unique Rigid Bodies, it is strongly recommended to make each asset unique. Tracking of multiple congruent Rigid Bodies could be lost during capture, either by occlusion or by stepping outside of the capture volume. Also, when two non-unique Rigid Bodies are positioned in close vicinity and overlap in the scene, their marker labels may get swapped. If this happens, additional effort will be required to correct the labels in post-processing.
Multiple Rigid Bodies Tracking
Depending on the object, there could be limitations on marker placements and number of variations of unique placements that could be achieved. The following list provides sample methods for varying unique arrangements when tracking multiple Rigid Bodies.
1. Create Distinctive 2D Arrangements. Create distinctive, non-congruent, marker arrangements as the starting point for producing multiple variations, as shown in the examples above.
2. Vary heights. Use marker bases or posts, with different heights to introduce variations in elevation to create additional unique arrangements.
3. Vary Maximum Marker to Marker Distance. Increase or decrease the overall size of the marker arrangements.
4. Add Two (or More) Markers. Lastly, if an additional variation is needed, add extra markers to introduce uniqueness. We recommend adding at least two extra markers in case any of them becomes occluded.
A set of markers attached to a rigid object can be grouped and auto-labeled as a Rigid Body. This Rigid Body definition can be utilized in multiple takes to continuously auto-label the same Rigid Body markers. Motive recognizes the unique spatial relationship in the marker arrangement and automatically labels each marker to track the Rigid Body. At least three coordinates are required to define a plane in 3D space, and therefore, a minimum of three markers are essential for creating a Rigid Body.
Step 1.
Select all associated Rigid Body markers in the 3D viewport.
Step 2.
On the Builder pane, confirm that the selected markers match the markers that you wish to define the Rigid Body from.
Step 3.
Click Create to define a Rigid Body asset from the selected markers.
You can also create a Rigid Body by doing the following actions while the markers are selected:
Perspective View (3D viewport): While the markers are selected, right-click on the perspective view to access the context menu. Under the Rigid Body section, click Create From Selected Markers.
Hotkey: While the markers are selected, use the create Rigid Body hotkey (Default: Ctrl +T).
Step 4.
Once the Rigid Body asset is created, the markers will be colored (labeled) and interconnected to each other. The newly created Rigid Body will be listed under the Assets pane.
Defining Assets in Edit mode:
If Rigid Bodies or Skeletons are created in Edit mode, the corresponding Take needs to be auto-labeled. Only then will the Rigid Body markers be labeled using the Rigid Body asset, and positions and orientations be computed for each frame. If the 3D data has not been re-labeled after edits to the recorded data, the asset may not be tracked.
Rigid Body properties consist of various configurations of Rigid Body assets in Motive, and they determine how Rigid Bodies are tracked and displayed in Motive. For more information on each property, read through the Properties: Rigid Body page.
Default Properties
When a Rigid Body is first created, default Rigid Body properties are applied to the newly created assets. The default creation properties are configured under the Assets section in the Application Settings panel.
Modifying Properties
Properties for existing Rigid Body assets can be changed from the Properties pane.
You can add or remove Marker Constraints from a Rigid Body in the Constraints pane.
To add a marker, select the marker in the Perspective View and make sure an existing Rigid Body is selected from the dropdown in the Constraints pane.
Once selected you can click the '+' in the Constraints pane to add the marker to the Rigid Body.
To remove a marker from the Rigid Body, simply select the marker in the Constraints pane and click '-'.
The pivot point of a Rigid Body is used to define both its position and orientation. When a Rigid Body is created, its pivot point is placed at its geometric center by default, and its orientation axes are aligned with the global coordinate axes. To view the pivot point and the orientation in the 3D viewport, set Bone Orientation to true under the display settings of a selected Rigid Body in the Properties pane.
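The default pivot placement described above can be sketched as a simple centroid computation. This is an illustration of the concept only; Motive's internal calculation may differ in detail, and the marker coordinates are assumptions.

```python
def geometric_center(markers):
    """Mean of the marker positions on each axis: the default pivot location."""
    n = len(markers)
    return tuple(sum(m[axis] for m in markers) / n for axis in range(3))

# Four markers of a hypothetical Rigid Body (coordinates in mm):
markers = [(0.0, 0.0, 0.0), (100.0, 0.0, 0.0), (0.0, 100.0, 0.0), (0.0, 0.0, 100.0)]
print(geometric_center(markers))  # (25.0, 25.0, 25.0)
```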
As mentioned previously, the orientation axes of a Rigid Body are aligned with the global axes by default when the Rigid Body is first created. After a Rigid Body is created, its orientation can be adjusted by editing the Rigid Body orientation in the Builder pane or by using the Gizmo tools as described in the next section.
There are situations where the desired pivot point location is not at the center of a Rigid Body. The location of the pivot point can be adjusted by assigning it to a marker or by translating it along the Rigid Body axes (x, y, z). For the most accurate pivot point location, attach a marker at the desired pivot location, set the pivot point to that marker, and apply translations for precise adjustments. If you are adjusting the pivot point after the capture, in Edit mode, the Take will need to be auto-labeled again to apply the changes.
Use the gizmo tools from the perspective view options to easily modify the position and orientation of Rigid Body pivot points. You can translate and rotate the Rigid Body pivot, assign the pivot to a specific marker, and/or assign the pivot to the mid-point among selected markers.
Select Tool (Hotkey: Q): Select tool for normal operations.
Translate Tool (Hotkey: W): Translate tool for moving the Rigid Body pivot point.
Rotate Tool (Hotkey: E): Rotate tool for reorienting the Rigid Body coordinate axis.
Scale Tool (Hotkey: R): Scale tool for resizing the Rigid Body pivot point.
Read through the Gizmo tools page for detailed information.
To translate the pivot point, access the Rigid Body editing tools in the Builder pane while the Rigid Body is selected. In the Location section, you can input the amount of translation (in mm) that you wish to apply. Note that the translation will be applied along the x/y/z of the Rigid Body orientation axis. Resetting the translation will position the pivot point at the geometric center of the Rigid Body according to its marker positions.
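The note that translations are applied along the Rigid Body's own axes can be illustrated with a small sketch: an offset entered in local x/y/z must first be rotated into global coordinates before it moves the pivot. The helper below is hypothetical, not Motive code, and the example rotation is an assumption.

```python
def translate_pivot(pivot_global, rotation, offset_local_mm):
    """Move a pivot by an offset given in the Rigid Body's local x/y/z axes.

    rotation is a 3x3 row-major matrix mapping body axes to global axes.
    """
    return tuple(
        pivot_global[row] + sum(rotation[row][col] * offset_local_mm[col] for col in range(3))
        for row in range(3)
    )

# Body rotated 90 degrees about the global Y axis: local +x points along
# global -z, so a 10 mm offset on local x moves the pivot 10 mm in global -z.
R = [[0.0, 0.0, 1.0],
     [0.0, 1.0, 0.0],
     [-1.0, 0.0, 0.0]]
print(translate_pivot((0.0, 0.0, 0.0), R, (10.0, 0.0, 0.0)))  # (0.0, 0.0, -10.0)
```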
If you wish to reset the pivot point, simply open the Rigid Body context menu in the Perspective pane and click Reset Pivot. The location of the pivot point will be reset back to the center of the Rigid Body again.
This feature is useful when tracking a spherical object (e.g. a ball). The Spherical Pivot Placement feature in the Builder pane assumes that all of the Rigid Body markers are placed on the surface of a spherical object, and the pivot point will be calculated and re-positioned accordingly. To do this, select a Rigid Body, access the Modify tab in the Builder pane, and click Apply under Spherical Pivot Placement.
Rigid Body tracking data can either be exported to a separate file or streamed to client applications in real time:
Captured 6 DoF Rigid Body data can be exported into CSV, FBX, or BVH files. See: Data Export
You can also use one of the streaming plugins or use NatNet client applications to receive tracking data in real-time. See: NatNet SDK
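As a small illustration of working with exported data, the sketch below parses a deliberately simplified CSV layout using only the standard library. The column names here are assumptions for the example; real Motive CSV exports contain additional metadata and header rows, so consult the Data Export page for the exact format.

```python
import csv
import io

# Hypothetical, simplified layout: frame, time, rotation (qx..qw), position (x,y,z).
sample = io.StringIO(
    "frame,time,qx,qy,qz,qw,x,y,z\n"
    "0,0.000,0,0,0,1,0.10,1.00,0.25\n"
    "1,0.008,0,0,0,1,0.11,1.00,0.25\n"
)

# Pull out (frame, x, y, z) for each row of the Rigid Body track:
positions = [
    (int(row["frame"]), float(row["x"]), float(row["y"]), float(row["z"]))
    for row in csv.DictReader(sample)
]
print(positions[0])  # (0, 0.1, 1.0, 0.25)
```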
Assets can be exported into a Motive user profile (.MOTIVE) file if they need to be re-imported. The user profile is a text-readable file that can contain various configuration settings in Motive, including the asset definitions.
When asset definitions are exported to a MOTIVE user profile, the file stores the marker arrangements calibrated for each asset, and they can be imported into different Takes without creating a new asset in Motive. Note that these files specifically store the spatial relationship of each marker; therefore, only identical marker arrangements will be recognized and defined with the imported asset.
To export all of the assets in Live mode or in the current TAK file, go to the Files tab → Export Assets. You can also use the Files tab → Export Profile to export other software settings, including the assets.
This feature is supported in Live Mode only.
The Rigid Body refinement tool improves the accuracy of Rigid Body calculations in Motive. When a Rigid Body asset is first created, Motive references only a single frame for the Rigid Body definition. The Rigid Body refinement tool lets Motive collect additional samples in Live mode to achieve more accurate tracking results. More specifically, this feature improves the calculation of the expected marker locations of the Rigid Body as well as the position and orientation of the Rigid Body itself.
Steps
Select View from the toolbar at the top and open the Builder pane.
Select the Rigid Bodies from the Type dropdown menu.
In Live mode, select an existing Rigid Body asset that you wish to refine from the Assets pane.
Hold the selected physical Rigid Body at the center of the capture volume so that as many cameras as possible can clearly capture its markers.
Click Refine in the Builder pane.
Slowly rotate the Rigid Body to collect samples at different orientations until the progress bar is full.
Once all necessary samples are collected, the Refine and Create + Refine buttons will appear again in the Builder pane and the refinements will have been applied.
This page provides detailed instructions on camera system calibration and information about the Calibration pane.
Calibration is essential for high-quality optical motion capture systems. During calibration, the system computes the position and orientation of each camera and the amount of distortion in captured images, and these are used to construct the 3D capture volume in Motive. This is done by observing 2D images from multiple synchronized cameras and associating the positions of known calibration markers from each camera through triangulation.
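The triangulation step can be sketched with a textbook two-ray example: each calibrated camera defines a ray toward an observed marker, and the 3D point is estimated near where those rays meet. This is an illustration of the principle, not Motive's solver; the camera positions and ray directions are assumptions.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def triangulate(p1, d1, p2, d2):
    """Midpoint of the closest points between two camera rays p + t*d."""
    w0 = tuple(a - b for a, b in zip(p1, p2))
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b  # zero only when the rays are parallel
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    q1 = tuple(p + t * di for p, di in zip(p1, d1))  # closest point on ray 1
    q2 = tuple(p + s * di for p, di in zip(p2, d2))  # closest point on ray 2
    return tuple((u + v) / 2 for u, v in zip(q1, q2))

# Two cameras one meter left and right of the origin, both seeing the same
# marker; the two rays intersect at (0, 0, 2):
print(triangulate((-1, 0, 0), (1, 0, 2), (1, 0, 0), (-1, 0, 2)))  # (0.0, 0.0, 2.0)
```

With real, slightly noisy data the rays rarely intersect exactly, which is why the midpoint of the closest approach (rather than an exact intersection) is used here.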
Please note that if there is any change in the camera setup over the course of a capture, the system must be recalibrated to accommodate the changes. Moreover, even if the setup is not altered, calibration accuracy may naturally deteriorate over time due to ambient factors, such as changes in the light entering the capture volume as the day progresses and fluctuations in temperature. Thus, for accurate results, it is recommended to calibrate the system periodically.
Prepare and optimize the capture volume for setting up a motion capture system.
Apply masks to ignore existing reflections in the camera view.
Collect calibration samples through the wanding process.
Review the wanding result and apply calibration.
Set the ground plane to complete the system calibration.
Cameras need to be appropriately placed and configured to fully cover the capture volume.
Each camera must be mounted securely so that they remain stationary during capture.
Motive's camera settings used for calibration should ideally remain unchanged throughout the capture. Re-calibration may be required if there are any significant modifications to settings that influence data acquisition, such as camera settings, gain settings, and Filter Switcher settings.
Before performing system calibration, all extraneous reflections or unnecessary markers should ideally be removed or covered so that they are not seen by the cameras. If this is not possible, extraneous reflections can be ignored by applying masks over them in Motive.
Masks can be applied by clicking Mask in the Calibration pane, which applies red masks over all of the reflections detected in the 2D camera view. Once masked, the pixels in the masked regions will be entirely filtered out from the data. Please note that masks are applied additively, so if there are already masks applied in the camera view, clear them first before applying new ones.
Active Wanding:
Masking camera views is only necessary when calibrating with wands that use passive markers. Active calibration wands can calibrate the capture volume while the LEDs of all the cameras are turned off. If the capture volume has a large amount of reflective material that cannot be removed, this method is highly recommended.
Check the calibration pane to see if any of the cameras are seeing extraneous reflections or noise in their view.
Check the corresponding camera view to identify where the extraneous reflection is coming from, and if possible, remove them from the capture volume or cover them so that the cameras do not see them.
In the Calibration pane, click Mask to apply masks over all of the existing reflections in the view.
Masking from the Cameras Viewport
You should be careful when using the masking features because masked pixels are completely filtered out of the 2D data. In other words, data in masked regions will not be collected for computing the 3D data, and excessive use of masking may result in data loss or frequent marker occlusions. For this reason, all removable reflective objects should be taken out or covered before using the masking tool so that masking can be minimized. After all reflections are removed or masked from the view, proceed to the wanding process.
The wanding process is the core pipeline for collecting calibration samples in Motive. A calibration wand is waved in front of the cameras repeatedly throughout the volume, allowing all cameras to see the calibration markers. Through this process, each camera captures sample data points used to compute its position and orientation in 3D space.
It is important to understand the requirements of good wanding samples. For a streamline process, the following requirements must be met:
At least two, or more, cameras must see all of the three calibration markers simultaneously.
Cameras should only see calibration markers. If any other reflection or noise is detected during the wanding process, the sample will not be collected and may affect the calibration result negatively. For this reason, person who is wanding should not be wearing anything reflective.
The markers on the calibration wand must be in good condition. If the marker surface is damaged or scuffed, the system may struggle to collect wanding samples.
There are different types of calibration wands suited for different capture applications.
Calibration Wands
CW-500: The CW-500 calibration wand has a wand-width of 500mm when the markers are placed in the configuration A. This wand is suitable for calibrating a large size capture volume because the markers are spaced out further apart, allowing the cameras to easily capture individual markers even at long distances.
CW-500 Active: Sharing the same dimensions as the CW-500, the active version is recommended for capture volumes that contain a large amount of reflective material that cannot be removed. This wand calibrates the volume while the LEDs of all mounted cameras are turned off.
CW-250: The CW-250 calibration wand has a wand-width of 250mm. This wand is suitable for calibrating small to medium size volumes. Its narrower wand-width allows cameras set up in a smaller volume to easily capture all three calibration markers within the same frame. The CW-500 wand can also be used like a CW-250 wand if the markers are positioned in configuration B.
CWM-125 / CWM-250: Both the CWM-125 and CWM-250 wands are designed for calibrating the system for precision capture applications. The accuracy of the calibrated wand width is most precise and reliable on these wands, making them best suited for precision capture in small-volume applications.
Before starting the wanding process, if any of the cameras are detecting extraneous reflections, return to the masking steps and make sure they are either masked or removed.
Set the Calibration Type. If you are calibrating a new capture volume, choose Full Calibration.
Under the Wand settings, specify the wand that you will be using to calibrate the volume. It is very important to input the matching wand size here. If an incorrect dimension is given to Motive, the calibrated 3D volume will be scaled incorrectly.
Double-check the calibration settings. Once confirmed, press Start Wanding to start collecting wanding samples. Do not have any specific camera selected if you wish to calibrate the entire camera system.
Start wanding. Bring your calibration wand into the capture volume and start waving it gently across the entire volume. Gently draw figure-eights repetitively with the wand to collect samples at varying orientations, covering as much space as possible for sufficient sampling. Wanding trails will be shown in colors in the 2D view, and a table displaying the status of the wanding process will appear in the Calibration pane so you can monitor the progress. For best results, wand the volume evenly and comprehensively, covering both low and high elevations. If you wish to start calibrating inside the volume, cover one of the markers and expose it where you wish to start wanding. When at least two cameras detect all three markers while no other reflections are present in the volume, the wand will be recognized, and Motive will start collecting samples.
Wand until the camera squares in the Calibration pane turn from dark green (insufficient samples) to light green (sufficient samples). Once all of the squares have turned light green, the Start Calculating button will become active.
After wanding throughout all areas of the volume, consult each 2D view in the Camera Preview pane to evaluate individual camera coverage. Each camera view should be thoroughly covered with wand samples; if there are any large gaps, focus wanding on those areas to increase coverage. When each camera has collected a sufficient number of calibration samples, press Calculate in the Calibration pane, and Motive will start calculating the calibration for the capture volume. Generally, 1,000-4,000 samples are enough. Samples above this threshold are unnecessary and can actually be detrimental to a calibration's accuracy.
Wanding Tips
Avoid waving the wand too fast. This may introduce bad samples.
Avoid wearing reflective clothing or accessories while wanding. This can introduce extraneous samples which can negatively affect the calibration result.
Try not to collect more than 10,000 samples. Extra samples could negatively affect the calibration.
Try to collect wanding samples covering different areas of each camera view. The status indicator on Prime cameras can be used to monitor the sample coverage on individual cameras.
Although it is beneficial to collect samples all over the volume, it is sometimes useful to collect more samples in the vicinity of target regions where more tracking is needed. Doing so gives the calibration better accuracy in those specific regions.
Marker Labeling Mode
When performing calibration wanding, please make sure the Marker Labeling Mode is set to the default Passive Markers Only setting. This setting can be found under Application Settings: Application Settings → Live-Reconstruction tab → Marker Labeling Mode. There are known problems with wanding in one of the active marker labeling modes. This applies for both passive marker calibration wands and IR LED wands.
For Prime series cameras, the LED indicator ring displays the status of the wanding process. As soon as wanding is initiated, the LED ring will turn dark. When a camera is detecting all three markers on the calibration wand, part of its LED ring will glow blue to indicate that the camera is collecting samples, and the clock-position of the blue light will indicate the wand position in the respective camera view. As calibration samples are collected by each camera, green lights will fill up around the ring to provide feedback on whether enough samples have been collected. Eventually, we want all of the cameras to be filled with bright green light, confirming that enough samples covering all areas of the camera view have been collected. Also, starting from Motive 3.0, for any cameras that have not collected enough samples toward the end of the wanding process, the ring light will start to glow white.
For more information on camera status indicators, please visit our wiki page here.
Calibration Type
You can select from different calibration types before wanding: Full and Refine.
Full: Calibrate cameras from scratch, discarding any prior known position of the camera group or lens distortion information. A Full calibration will also take the longest time to run.
Refine: Adjusts for slight changes in the camera calibration based on a prior calibration. This solves faster than a Full calibration. Only use this if your previous calibration closely reflects the current placement of the cameras; in other words, Refine only works if the cameras have not moved significantly since they were last calibrated. Only slight changes in camera position and orientation are allowed, such as those that occur naturally from the environment (e.g. mount expansion).
Refinement results will be poor if a full calibration has not been completed previously on the selected cameras.
After sufficient marker samples have been collected, press Start Calculating to calibrate using the collected samples. The time needed for the calculation varies depending on the number of cameras in the setup as well as the number of collected samples. As Motive starts calculating, blue wanding paths will be displayed in the view panes, and the Calibration pane will provide visual feedback on the calibration result of each camera. If you click Show List, you can also check the amount of error on each camera.
Tip: Calibration details for recorded Takes can also be reviewed. Select a Take in the Data pane, and related calibration results will be displayed under the Properties pane. This information is available only for Takes recorded in Motive 1.10 and above.
After the calculation, the calibration result will be reported in the Calibration pane. The result is directly related to the mean error, and the calibration result tiers are (in order from worst to best): Poor, Fair, Good, Great, Excellent, and Exceptional. If the results are acceptable, press Continue to apply the calibration; if not, press Cancel and repeat the wanding process. In general, if Motive reports anything below Excellent, you may want to adjust camera settings, refine your wanding technique, and try again.
Calibration Result
The final step of the calibration process is setting the ground plane and the origin. This is accomplished by placing the calibration square in your volume and telling Motive where the calibration square is. Place the calibration square inside the volume where you want the origin to be located and the ground plane to be leveled to. The position and orientation of the calibration square will be referenced for setting the coordinate system in Motive. Align the calibration square so that it references the desired axis orientation.
The longer leg of the calibration square indicates the positive z axis, and the shorter leg indicates the direction of the positive x axis. Accordingly, the positive y axis will automatically be directed upward in a right-handed coordinate system. The next step is to use the level indicator on the calibration square to ensure the orientation is horizontal to the ground. If any adjustment is needed, rotate the knob beneath the markers to adjust the balance of the calibration square.
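The axis convention above can be sketched numerically. This is a minimal illustration with made-up marker positions, not Motive's internal code: the longer leg gives +Z, the shorter leg gives +X, and their cross product yields the upward +Y of a right-handed system.

```python
import math

# Hypothetical calibration-square marker positions (meters): 'vertex' is the
# corner marker, 'long_end' terminates the longer (+Z) leg, and 'short_end'
# terminates the shorter (+X) leg.
vertex    = (0.0, 0.05, 0.0)
long_end  = (0.0, 0.05, 0.40)
short_end = (0.25, 0.05, 0.0)

def sub(a, b):
    return tuple(p - q for p, q in zip(a, b))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def normalize(v):
    n = math.sqrt(sum(p*p for p in v))
    return tuple(p / n for p in v)

z_axis = normalize(sub(long_end, vertex))   # longer leg -> +Z
x_axis = normalize(sub(short_end, vertex))  # shorter leg -> +X
y_axis = cross(z_axis, x_axis)              # right-handed -> +Y points up

print(y_axis)  # -> (0.0, 1.0, 0.0)
```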
After confirming that the calibration square is properly placed and detected by the Calibration pane, press Set Ground Plane. You may need to manually select the markers on the ground plane if Motive fails to auto-detect the ground plane. If needed, the ground plane can be adjusted later.
A custom calibration square can also be used to define the ground plane. A set of three markers is needed, and for an accurate ground plane, these markers need to form a right angle with one arm longer than the other, just like the shape of the calibration square. When using a custom calibration square, select Custom in the drop-down menu, manually input the correct vertical offset, and select the markers before setting the ground plane.
Vertical offset
The Vertical Offset is the distance between the center of the markers on the calibration square and the actual ground. For a custom calibration square, you will need to define this value so that Motive accounts for the offset distance and sets the global origin slightly below the markers. Accordingly, this value should correspond to the actual distance between the center of the marker and the lowest tip at the vertex of the calibration square. This setting can also be used when you want to place the ground plane at a specific elevation: a positive offset value will place the plane below the markers, and a negative value will place the plane above the markers.
The Ground Plane Refinement feature is used to improve the leveling of the coordinate plane. To refine the ground plane, use the bottom page selector to access the refine page. Then, place several markers with a known radius on the ground and set the vertical offset value to the corresponding radius. Select these markers in Motive and press Refine Ground Plane, and Motive will refine the leveling of the plane using the position data from each marker. This feature is especially useful when establishing a ground plane for a large volume, because the surface may not be perfectly uniform throughout the plane.
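The idea behind refinement can be illustrated with a small sketch: fit a plane through floor markers of known radius and measure its tilt from level. The marker positions below are hypothetical, and Motive's actual refinement uses all selected markers rather than this simple three-point plane.

```python
import math

# Hypothetical centers of three floor markers (meters). Each marker has a
# 7 mm radius, so the true floor sits 7 mm below the marker centers -- this
# is the vertical offset entered before refining.
MARKER_RADIUS = 0.007
p1 = (0.0, 0.0071, 0.0)
p2 = (2.0, 0.0069, 0.1)
p3 = (0.3, 0.0072, 2.5)

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

# The plane normal comes from two in-plane vectors; its angle from the
# +Y (up) axis measures how far the current ground plane is from level.
n = cross(sub(p2, p1), sub(p3, p1))
tilt_deg = math.degrees(math.acos(abs(n[1]) / math.sqrt(sum(c*c for c in n))))

# Estimated floor elevation: mean marker height minus the marker radius.
floor_y = sum(p[1] for p in (p1, p2, p3)) / 3 - MARKER_RADIUS

print(f"tilt from level: {tilt_deg:.4f} degrees, floor at y = {floor_y:.5f} m")
```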
If you wish to adjust the position and orientation of the global origin after a capture has been taken, you can apply capture volume translation and rotation from the Calibration pane. To apply the changes to recorded Takes, a new set of 3D data must be reconstructed from the recorded 2D data after the modification has been applied.
Calibration files can be used to preserve calibration results. Calibration information is exported and imported via the CAL file format. Calibration files reduce the effort of recalibrating the system every time you open Motive. They are automatically saved into the default folders after each calibration, but in general it is suggested to export the calibration before each capture session. By default, Motive loads the last calibration file that was created; this can be changed via the Application Settings.
Note: Whenever there is a change to the system setup (e.g. cameras moved) these calibration files will no longer be relevant and the system will need to be recalibrated.
The continuous calibration feature continuously monitors and refines the camera calibration to its best quality. When enabled, minor distortions to the camera system setup can be adjusted automatically without wanding the volume again. In other words, you can calibrate a camera system once and you will no longer have to worry about external distortions such as vibrations, thermal expansion on camera mounts, or small displacements on the cameras. For detailed information, read through the Continuous Calibration page.
Enabling/Disabling Continuous Calibration
Continuous calibration can be enabled or disabled from the Calibration pane once a system has been calibrated. The pane also shows when continuous calibration last updated.
When capturing throughout a whole day, temperature fluctuations may degrade calibration quality, and you will want to recalibrate the capture volume at different times of the day. However, repeating the entire calibration process can be tedious and time-consuming, especially with a high camera count. In this case, instead of repeating the entire calibration, you can record Takes containing the wand waves and the calibration square, and use those Takes to re-calibrate the volume in post-processing. This offline calibration saves calculation time on the capture day because the recorded wanding Take can be processed later. It also lets users inspect the collected capture data and re-calibrate a recorded Take only when signs of degraded calibration quality appear in the captures.
Offline Calibration Steps
1) Capture wanding/ground plane Takes. At different times of the day, record wanding Takes that closely resemble the calibration wanding process. Also record corresponding ground plane Takes with the calibration square set in the volume for defining the ground plane.
Whenever a system is calibrated, a Calibration Wanding file is saved; it can be used to reproduce the calibration file through the offline calibration process.
2) Load the recorded Wanding _Take_. If you wish to re-calibrate the cameras for captured Takes during playback, load the wanding take that was recorded around the same time.
3) Motive: Calibration pane. In the Edit mode, press Start Wanding. The wanding samples from recorded 2D data will be loaded.
4) Motive: Calibration pane. Press Calculate, and wait until the calculation process is complete.
5) Motive: Calibration pane. Apply Result and export the calibration file. File tab → Export Camera Calibration.
6) Load the recorded Ground Plane _Take_.
7) Open the saved calibration file. With the Ground Plane Take loaded in Motive, open the exported calibration file, and the saved camera calibration will be applied to the ground plane take.
8) Motive: Perspective View. From 2D data of the Ground Plane Take, select the calibration square markers.
9) Motive: Calibration pane: Ground Plane. Set the Ground plane.
10) Motive: Perspective View. Switch back to the Live mode. The recorded Take is now re-calibrated.
The partial calibration feature allows you to update the calibration for some selection of cameras in a system. The way this feature works is by updating the position of the selected cameras relative to the already calibrated cameras. This means that you only need to wand in front of the selected cameras as long as there is at least one unselected camera that can also see the wand samples.
This feature is especially helpful for high camera count systems where you only need to adjust a few cameras instead of re-calibrating the whole system. One common way to get into this situation is by bumping into a single camera. Partial calibrations allow you to quickly re-calibrate the single bumped camera that is now out of place. This feature is also useful for those who need to do a calibration without changing the location of the ground plane. The ground plane does not need to be reset because, as long as there is at least one unselected camera, Motive can use that camera to retain the position of the ground plane relative to the cameras.
Partial Calibration Steps
Open the Calibration Pane.
Set Calibration Type: In most cases you will want to set this to Full, but if the camera only moved slightly Refine works as well.
Specify the wand type.
From the Calibration Pane, click Start Wanding. A pop-up dialogue will appear indicating that only selected cameras are being calibrated.
Choose Calibrate Selected Cameras from the dialogue window.
Wave the calibration wand mainly within the view of the selected cameras.
Click Calculate. At this point, only the selected cameras will have their calibration updated.
Notes:
This feature relies on the unselected cameras being in a good calibration state. If the unselected cameras are out of calibration, then using this feature will produce a bad calibration.
Partial calibration does not update the calibration of unselected cameras. However, the calibration report that Motive provides does include all cameras that received samples, selected or unselected.
The partial calibration process can also be used for adding new cameras onto an existing calibration. Use the Full calibration type in this case.
Cameras can be modified using the gizmo tool if the Settings Window > General > Calibration > "Editable in 3D View" property is enabled. Without this property turned on the gizmo tool will not activate when a camera is selected to avoid accidentally changing a calibration. The process for using the gizmo tool to fix a misaligned camera is as follows:
Select the camera you wish to fix, then view from that camera (Hotkey: 3).
Select either the Translate or Rotate gizmo tool (Hotkey: W or E).
Use the red diamond visual to align the unlabeled rays roughly onto their associated markers.
Right-click, then choose "Correct Camera Position/Orientation". This will perform a calculation to place the camera more accurately.
Turn on Continuous Calibration if not already done. Continuous calibration should finish aligning the camera into the correct location.
The OptiTrack motion capture system is designed to track retro-reflective markers. However, active LED markers can also be tracked with appropriate customization. If you wish to use Active LED markers for capture, the system will ideally need to be calibrated using an active LED wand. Please contact us for more details regarding Active LED tracking.
This page provides some information on aligning a Rigid Body pivot point with the pivot point of a 3D model that replicates a real object.
Screenshots used on this page were captured in Motive 2.x. In Motive 3.x, the Rigid Body pivot point can be translated using the Rigid Body translations in the Builder pane. See the image below for a screenshot of the 3.x Builder and Properties panes for a Rigid Body.
When using streamed Rigid Body data to animate a 3D model that replicates a real-life object, the pivot points must be aligned. In other words, the location of the Rigid Body pivot must coincide with the location of the pivot point in the corresponding 3D model. If they are not aligned accurately, the animated motion will not be in a 1:1 ratio compared to the actual motion. This alignment is commonly needed for real-time VR applications where real-life objects are 3D modeled and animated in the scene. The suggested approaches for aligning these pivot points are discussed on this page.
There are two methods for doing this: using a measurement probe to sample 3D reference points, or simply aligning against a reference grayscale view. The first method, creating and using a measurement probe, is the most accurate and is recommended.
Step 1. Create a Rigid Body of the target object
First, create a Rigid Body from the markers on the target object. By default, the pivot point of the Rigid Body will be positioned at the geometric center of the marker placement. Then place the object somewhere stable where it will remain stationary.
Step 2. Create a measurement probe.
For instructions on creating a measurement probe, please refer to the Measurement Probe page. You can purchase our probe or create your own; all you need is four markers with a static relationship to a projected tip.
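The probe principle, rigid markers with a fixed offset to a projected tip, can be sketched as follows. The tip offset and pose below are hypothetical values for illustration; Motive computes the tip location from the probe calibration automatically.

```python
import math

# The probe tip itself carries no marker; its position is computed from the
# probe Rigid Body pose plus a fixed tip offset expressed in the probe's
# local frame. 'tip_local' is a hypothetical offset found during probe
# calibration (150 mm below the Rigid Body pivot).
tip_local = (0.0, -0.15, 0.0)

def tip_world(pivot, rotation, offset):
    """Apply the Rigid Body pose (3x3 rotation matrix + pivot translation)."""
    return tuple(
        pivot[i] + sum(rotation[i][j] * offset[j] for j in range(3))
        for i in range(3)
    )

# Example pose: probe rotated 90 degrees about Z, pivot at (1, 1, 0).
c, s = math.cos(math.pi / 2), math.sin(math.pi / 2)
R = [[c, -s, 0.0],
     [s,  c, 0.0],
     [0.0, 0.0, 1.0]]

tip = tip_world((1.0, 1.0, 0.0), R, tip_local)
print(tip)  # -> (1.15, 1.0, 0.0)
```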
Step 3. Collect data points to outline the silhouette
Use the measurement probe to collect sample data points that outline the silhouette of your object. Mark all of the corners and other key features on the object.
Step 4. Attach 3D model
After 3D data points have been generated using the probe, attach your game geometry (obj file) to the Rigid Body by turning on the Model Replace property and importing the geometry under Attached Geometry property.
From the sampled 3D points, you can also export the markers created from the probe to Maya or other content creation packages to generate models guaranteed to scale correctly.
Step 5. Translate the pivot point
The next step is to translate the 3D model so that the attached model aligns with the silhouette samples collected in Step 3. The model can be easily translated and rotated using the gizmo tools. Move, rotate, and scale the asset until it is aligned with the silhouette.
For accurate alignment, it may be easier to decrease the size of the marker visuals. This can be changed with the Marker Diameter setting in the application settings panel.
Step 6. Copy transformation values
After you have translated, rotated, and scaled the pivot point of the Rigid Body to align the attached 3D model with the sampled data points, the transformation values will be shown under the Attached Geometry property.
Copy and paste these transformation parameters into the Rigid Body location and orientation options under the Edit tab in the Builder pane. This will translate the pivot point of the Rigid Body in Motive, aligning it with the pivot point of the 3D model.
Step 7. Zero all transformation values in the Attached Geometry section
Once the Rigid Body pivot point has been moved using the Builder pane, zero all of the transformation configurations under the Attached Geometry property for the Rigid Body.
Alternatively, if the probe method is not applicable, you can switch one of the cameras into grayscale view, right-click the camera in the Cameras view, and select Make Reference. This will create a Rigid Body overlay in the Camera view pane, letting you align the Rigid Body pivot using a similar approach to the one above.
Once the capture volume is calibrated and all markers are placed, you are ready to capture Takes. This page covers key concepts and tips that are important for the recording pipeline. For real-time tracking applications, you can skip this page and read through the Data Streaming page.
There are two modes in Motive: Live mode and Edit mode. You can toggle between the two modes from the Control Deck or by using the Shift + ~ hotkey.
Live Mode
The Live mode is mainly used when recording new Takes or when streaming a live capture. In this mode, all of the cameras are continuously capturing 2D images and reconstructing the detected reflections into 3D data in real-time.
Edit Mode
The Edit mode is used for playback of captured Take files. In this mode, you can play back, or stream, recorded data. Captured Takes can also be post-processed to fix mislabeling errors or interpolate occluded trajectories if needed.
Tip: Prime series cameras illuminate blue in Live mode, green when recording, and turn off in Edit mode. See more at Camera Status Indicators.
In Motive, capture recording is controlled from the Control Deck. In Live mode, a new Take name can be assigned in the name box, or you can simply start recording and let Motive automatically generate new names on the fly. You can also create empty Takes in the Data Management pane for better organization. To start the capture, select Live mode and click the record button (red). In the Control Deck, record time and frames are displayed as Hour:Minute:Second:Frames.
Tip: For Skeleton tracking, always start and end the capture with a T-pose or A-pose, so that the Skeleton assets can be redefined from the recorded data as well.
Tip: Efficient ways of managing Takes
Always start by creating session folders for organizing related Takes. (e.g. name of the tracked subject).
Plan ahead and create a list of captures in a text file or a spreadsheet, and you can create empty takes by copying and pasting the list into the Data Management pane (e.g. walk, jog, run, jump).
Once pasted, empty Takes with the corresponding names will be imported.
Select one of the empty takes and start recording. The capture will be saved with the corresponding name.
If the capture was unsuccessful, simply record the same Take again, and another one will be recorded with an incremented suffix added at the end of the given Take name (e.g. walk_001, walk_002, walk_003). The suffix format is defined in the Application Settings.
When captured successfully, select another empty Take in the list and capture the next one.
When a capture is first recorded, both 2D data and real-time reconstructed 3D data are saved into the Take. For more details on each data type, refer to the Data Types page.
2D data: The recorded Take file includes just the 2D object images from each camera.
3D data: The recorded Take file also includes reconstructed 3D marker data in addition to 2D data.
Throughout a capture, you may notice that there are different types of markers in the 3D perspective view. In order to correctly interpret the tracking data, it is important to understand the differences between them. There are three displayed marker types: markers, Rigid Body markers, and bone (or Skeleton) markers.
Marker data, labeled or unlabeled, represent the 3D positions of markers. These markers do not represent Rigid Body or Skeleton solver calculations; they indicate the actual marker positions calculated from the camera data. These markers are represented as solid spheres in the viewport. By default, unlabeled markers are colored white, and labeled markers have colors that reflect the color setting of the corresponding Rigid Body or bone.
Labeled Marker Colors:
Colors of the unlabeled markers can be changed from the Application Settings.
Colors of the Rigid Body labeled markers can be changed from the properties of the corresponding asset.
Colors of the markers can be changed from the Constraints XML file if needed.
Rigid Body markers and Skeleton bone markers are referred to as Marker Constraints. They appear as transparent spheres within a Rigid Body or Skeleton, and each sphere reflects the position where the Rigid Body or Skeleton expects to find a 3D marker. When an asset definition is created, it is assumed that the markers are fixed in place and do not move over the course of the capture.
In order to view Marker Constraints, both the Marker Constraints visual aid option in the viewport and the Marker Constraints property on the corresponding asset must be enabled. This is enabled by default for Skeleton assets but must be enabled manually for Rigid Bodies. When the Rigid Body or Skeleton solver is tracking from the 3D markers, the marker reconstructions and the Marker Constraint positions will closely align in the viewport.
For Rigid Body assets, the asset definition expects the markers to be fixed in the same locations, with the object not deforming over the course of the capture. Each Rigid Body is given an acceptable deflection property value: as long as the actual marker position is within the allowable deflection from the Marker Constraint position, the marker will be labeled. For Skeleton assets, since body segments are not perfectly rigid, some amount of offset from the model marker position is allowed.
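The deflection test described above amounts to a simple distance check, sketched below. The 5 mm tolerance is a hypothetical value for illustration, not Motive's default.

```python
import math

# A marker keeps its label only while its reconstructed position stays
# within the allowable deflection of the expected (Marker Constraint)
# position. Tolerance here is hypothetical.
ALLOWED_DEFLECTION = 0.005  # meters

def within_deflection(expected, actual, tolerance=ALLOWED_DEFLECTION):
    """True if the reconstructed marker is close enough to keep its label."""
    return math.dist(expected, actual) <= tolerance

print(within_deflection((0.0, 1.0, 0.0), (0.002, 1.001, 0.0)))  # -> True
print(within_deflection((0.0, 1.0, 0.0), (0.02, 1.0, 0.0)))     # -> False
```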
This page provides instructions on how to utilize the gizmo tools for modifying asset definitions (Rigid Bodies and Skeletons) in the 3D Perspective View of Motive.
Edit Mode: As of Motive 3.0, asset editing can only be performed in Edit mode
Solved Data: In order to edit asset definitions from a recorded Take, corresponding Solved Data must be removed before making the edit, and then recalculated.
The gizmo tools allow users to make modifications to reconstructed 3D markers, Rigid Bodies, or Skeletons in both real-time and post-processing of tracking data.
Use the gizmo tools from the perspective view options to easily modify the position and orientation of Rigid Body pivot points. You can translate and rotate the Rigid Body pivot, assign the pivot to a specific marker, and/or assign the pivot to the midpoint of a set of selected markers.
Select Tool (Hotkey: Q): Select tool for normal operations.
Translate Tool (Hotkey: W): Translate tool for moving the Rigid Body pivot point.
Rotate Tool (Hotkey: E): Rotate tool for reorienting the Rigid Body coordinate axis.
Scale Tool (Hotkey: R): Scale tool for resizing the Rigid Body pivot point.
Precise Position/Orientation: When translating or rotating a Rigid Body, you can CTRL + select a 3D reconstruction from the scene to precisely position the pivot point, or align a coordinate axis, directly on, or towards, the selected marker. Multiple reconstructions can also be selected, in which case their geometric center (midpoint) will be used as the target reference.
Please note that the following tutorial videos were created in an older version of Motive. The workflow in 3.0 is slightly different and only requires you to select Translate, Rotate, or Scale from the 3D Viewport Toolbar selection dropdown to begin manipulating your Asset.
You can utilize the gizmo tools to modify skeleton bone lengths, joint orientations, or the spacing of the markers. Translating and rotating skeleton assets changes how the skeleton bones are positioned and oriented with respect to the tracked markers; thus, any changes in the skeleton definition will affect how realistically the human movement is represented.
The scale tool modifies the size of selected skeleton segments.
The gizmo tools can also be used to edit the positions of reconstructed markers. To do this, you must be working with reconstructed 3D data in post-processing. In live-tracking, or in 2D mode performing live reconstruction, marker positions are reconstructed frame-by-frame and cannot be modified. Edit Assets must be disabled to do this (Hotkey: T).
Translate
Using the translate tool, 3D positions of reconstructed markers can be modified. Simply click on the markers, turn on the translate tool (Hotkey: W), and move the markers.
Rotate
Using the rotate tool, the 3D positions of a group of markers can be rotated about their center. Simply select a group of markers, turn on the rotate tool (Hotkey: E), and rotate them.
Scale
Using the scale tool, the 3D spacing of a group of markers can be scaled. Simply select a group of markers, turn on the scale tool (Hotkey: R), and scale their spacing.
Cameras can be modified using the gizmo tool if the Settings Window > General > Calibration > "Editable in 3D View" property is enabled. Without this property turned on the gizmo tool will not activate when a camera is selected to avoid accidentally changing a calibration. The process for using the gizmo tool to fix a misaligned camera is as follows:
Select the camera you wish to fix, then view from that camera (Hotkey: 3).
Select either the Translate or Rotate gizmo tool (Hotkey: W or E).
Use the red diamond visual to roughly align the unlabeled rays onto their associated markers.
Right-click, then choose "Correct Camera Position/Orientation". This will perform a calculation to place the camera more accurately.
Turn on Continuous Calibration if it is not already on. Continuous calibration should finish aligning the camera into the correct location.
Captured tracking data can be exported in Comma Separated Values (CSV) format. This file format uses comma delimiters to separate multiple values in each row, and it can be imported by spreadsheet software or a programming script. Depending on which data export options are enabled, exported CSV files can contain marker data, Rigid Body data, and/or Skeleton data. CSV export options are listed in the following charts:
General Export Options
CSV Export Options
In the CSV file, Rigid Body markers have both a physical marker column and a Marker Constraints column. The two columns have nearly the same ID and are distinguished by their first 8 characters, which make each uniquely identifiable.
When a marker is occluded in Motive, the Marker Constraints column will display the last known position where Motive expects the marker to be. The actual physical marker will display a blank cell or null value, since Motive cannot account for its actual location while it is occluded.
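When post-processing an exported CSV, these blank cells need explicit handling. The Python sketch below parses a simplified stand-in file; real exports carry several metadata header rows and Take-specific column names, so the header shown here is a hypothetical minimal version.

```python
import csv
import io

# Simplified stand-in for an exported tracking CSV. Frame 1 simulates an
# occluded physical marker (blank X/Y/Z cells).
sample = (
    "Frame,Time,Marker1.X,Marker1.Y,Marker1.Z\n"
    "0,0.000000,1.0,2.0,3.0\n"
    "1,0.008333,,,\n"
)

rows = []
for row in csv.DictReader(io.StringIO(sample)):
    # Blank cells (occlusions) become None instead of raising ValueError.
    rows.append({k: (float(v) if v != "" else None) for k, v in row.items()})

# Collect the frames where this marker was occluded.
occluded = [r["Frame"] for r in rows if r["Marker1.X"] is None]
```

The same None-for-blank convention then lets downstream code distinguish a genuine zero coordinate from a missing sample.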
When the header is disabled, this information will be excluded from the CSV files. Instead, the file will have frame IDs in the first column, time data in the second column, and the corresponding mocap data in the remaining columns.
CSV Headers
TIP: Occlusion in the marker data
Since device data is usually sampled at a higher rate than the camera system, the camera samples are collected at the center of the corresponding device data samples. For example, if the device data has 9 sub-frames for each camera frame sample, the camera tracking data will be recorded at every 5th frame of device data.
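The center alignment described above can be expressed as a one-line calculation. This is an illustrative sketch; `aligned_subframe` is a hypothetical helper, not a Motive API.

```python
def aligned_subframe(subframes_per_camera_frame):
    """Return the 1-based index of the device sub-frame that sits at the
    center of the device samples covering one camera frame. Assumes an odd
    ratio, as in the 9-sub-frame example above."""
    return (subframes_per_camera_frame + 1) // 2

# With 9 device sub-frames per camera frame, the camera sample aligns
# with the 5th sub-frame.
```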
Force Plate Data: Each of the force plate CSV files will contain basic properties such as platform dimensions and mechanical-to-electrical center offset values. The mocap frame number, force plate sample number, forces (Fx/Fy/Fz), moments (Mx, My, Mz), and location of the center of pressure (Cx, Cy, Cz) will be listed below the header.
Analog Data: Each of the analog data CSV files contains analog voltages from each configured channel.
This page provides basic description of marker labels and instructions on labeling workflow in Motive.
Marker Label
Marker labels are basically software name tags that are assigned to trajectories of reconstructed 3D markers so that they can be referenced for tracking individual markers, Rigid Bodies, or Skeletons. Motive identifies marker trajectories using the assigned labels. Labeled trajectories can be exported individually, or combined together to compute positions and orientations of the tracked objects. In most applications, all of the target 3D markers will need to be labeled in Motive. There are two methods for labeling markers in Motive: auto-labeling and manual labeling, and both labeling methods will be covered in this page.
Monitoring Labels
Labeled or unlabeled trajectories can be identified and resolved from the following places in Motive:
There are two approaches to labeling markers in Motive:
Auto-label pipeline: Automatically label sets of Rigid Body markers and Skeleton markers using calibrated asset definitions.
Rigid body and Skeleton asset definitions contain information of marker placements on corresponding assets. This is recorded when the assets are first created, and the auto-labeler in Motive uses them to label a set of reconstructed 3D trajectories that resemble marker arrangements of active assets. Once all of the markers on active assets are successfully labeled, corresponding Rigid Bodies and Skeletons get tracked in the 3D viewport.
The auto-labeler runs in real-time during Live mode, and the marker labels get saved into the recorded Takes. Running the auto-labeler again in post-processing will attempt to label the Rigid Body and Skeleton markers again from the 3D data.
From Data pane
Right-click to bring up the context menu
Click 'Reconstruct and Auto-Label' to process the selected Takes. This pipeline will create a new set of 3D data and auto-label the markers from it.
This will label all the markers that match the corresponding asset definitions.
A Marker Set is a list of labels, or marker names, that can be manually assigned to unlabeled markers. One can be created when there is a need to label individual markers in the scene that are not associated with a Rigid Body or a Skeleton asset.
Under the drop-down menu in the Labels pane, select an asset you wish to label.
All of the involved markers will be displayed under the columns.
From the label list, select unlabeled or mislabeled markers.
Hiding Marker Labels
Labeling Tips
When working with Skeleton assets, label the hip segment first. The hip segment is the main parent segment, at the top of the segment hierarchy, to which all other child segments are attached. Manually assigning the hip markers sometimes helps the auto-labeler label the entire asset.
Step 4. Select an asset that you wish to label.
Step 5. From the label columns, click on a marker label that you wish to re-assign.
Step 6. Inspect behavior of a selected trajectory and its labeling errors and set the appropriate labeling settings (allowable gap size, maximum spike and applied frame ranges).
Step 7. Switch to the QuickLabel mode (Hotkey: D).
Step 9. When all markers have been labeled, switch back to the Select Mode.
Step 1. Start with 2D data of a captured Take with model assets (Skeletons and Rigid Bodies).
Step 3. Examine the reconstructed 3D data, and inspect the frame range where markers are mislabeled.
Step 5. Unlabel all trajectories you want to re-auto-label.
Step 6. Auto-Label the Take again. Only the unlabeled markers will get re-labeled, and all existing labels will be kept the same.
Step 7. Re-examine the marker labels. If some of the labels are still not assigned correctly from any of the frames, repeat the steps 3-6 until complete.
The general process for resolving labeling error is:
Identify the trajectory with the labeling error.
Determine if the error is a swap, an occlusion, or unlabeled.
Resolve the error with the correct tool.
Swap: Use the Swap Fix tool ( Edit Tools ) or just re-assign each label ( Labels panel ).
When manually labeling markers to fix swaps, set appropriate settings for the labeling direction, max spike, and selected range settings.
Occlusion: Use the Gap Fill tool ( Edit Tools ).
Unlabeled: Manually label an unlabeled trajectory with the correct label ( Labels panel ).
This page explains different types of captured data in Motive. Understanding these types is essential in order to fully utilize the data-processing pipelines in Motive.
2D data is the foundation of motion capture data. It mainly includes the 2D frames captured by each camera in a system.
Recorded 2D data can be reconstructed and auto-labeled to derive the 3D data.
The 3D tracking data is not yet computed. Tracking data can be exported only after reconstructing the 3D data.
In playback of recorded 2D data, the 2D data will be live-reconstructed into 3D data and shown in the 3D viewport.
Reconstructed 3D marker positions.
Marker labels can be assigned.
Assets are modeled and the tracking information is available.
Deleting 3D data for a single _Take_
When no frame range is selected, this will delete 3D data from the entire frame range. When a frame range is selected from the Timeline Editor, this will delete 3D data in the selected range only.
Deleting 3D data for multiple _Takes_
When a Rigid Body or Skeleton exists in a Take, Solved data can be recorded. From the Assets pane, right-click one or more asset and select Solve from the context menu to calculate the solved data. To delete, simply click Remove Solve.
Deleting labels for a single _Take_
When no frame range is selected, it will unlabel all markers throughout the Take. When a frame range is selected from the Timeline Editor, this will unlabel markers in the selected range only.
Deleting labels for multiple _Takes_
Even when a frame range is selected from the timeline, it will unlabel all markers from all frame ranges of the selected Takes.
General Export Options
C3D Specific Export Options
Common Conventions
Since Motive uses a different coordinate system than the system used in common biomechanics applications, it is necessary to modify the coordinate axis to a compatible convention in the C3D exporter settings. For biomechanics applications using z-up right-handed convention (e.g. Visual3D), the following changes must be made under the custom axis.
X axis in Motive should be configured to positive X
Y axis in Motive should be configured to negative Z
Z axis in Motive should be configured to positive Y.
This will convert the coordinate axis of the exported data so that the x-axis represents the anteroposterior axis (front/back), the y-axis represents the mediolateral axis (left/right), and the z-axis represents the longitudinal axis (up/down).
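Applied to raw coordinates, the same axis changes amount to the mapping sketched below. This is an illustration assuming the exporter's Y field takes Motive's negative Z and its Z field takes Motive's positive Y, as listed above; the function name is hypothetical.

```python
def motive_to_zup(point):
    """Map a point from Motive's y-up right-handed frame to a z-up
    right-handed frame: (x, y, z) -> (x, -z, y). Handedness is preserved,
    and Motive's up axis (y) becomes the new up axis (z)."""
    x, y, z = point
    return (x, -z, y)
```

For example, a marker one unit above the ground plane in Motive (y = 1) ends up one unit up the z-axis after conversion.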
MotionBuilder Compatible Axis Convention
This is a preset convention for exporting C3D files for use in Autodesk MotionBuilder. Even though Motive and MotionBuilder both use the same coordinate system, MotionBuilder assumes biomechanics standards when importing C3D files (positive X to negative X; positive Y to positive Z; positive Z to positive Y). Accordingly, when exporting C3D files for MotionBuilder use, set the Axis setting to MotionBuilder Compatible, and the axes will be exported using the following convention:
Motive: X axis → Set to negative X → Mobu: X axis
Motive: Y axis → Set to positive Z → Mobu: Y axis
Motive: Z axis → Set to positive Y → Mobu: Z axis
There is a known behavior where C3D data with timecode does not show up accurately when imported into MotionBuilder. This happens because MotionBuilder sets the subframe counts in the timecode using its own playback rate instead of the rate of the timecode. When this happens, you can set the playback rate in MotionBuilder to match the rate of the timecode generator (e.g. 30 Hz) to get correct timecode. This occurs only with C3D import in MotionBuilder; FBX import works fine without changing the playback rate.
Motive can export tracking data in BioVision Hierarchy (BVH) file format. Exported BVH files do not include individual marker data. Instead, a selected skeleton is exported using hierarchical segment relationships. In a BVH file, the 3D location of a primary skeleton segment (Hips) is exported, and data on subsequent segments are recorded by using joint angles and segment parameters. Only one skeleton is exported for each BVH file, and it contains the fundamental skeleton definition that is required for characterizing the skeleton in other pipelines.
Notes on relative joint angles generated in Motive: Joint angles generated and exported from Motive are intended for basic visualization purposes only and should not be used for any type of biomechanical or clinical analysis.
General Export Options
BVH Specific Export Options
Common tracking errors include marker occlusions and labeling errors. Labeling errors include unlabeled markers, mislabeled markers, and label swaps. Fortunately, label errors can be corrected simply by reassigning the proper labels to the markers. Markers may also be blocked from camera views during capture. In this case, the markers will not be reconstructed into 3D space, introducing a gap in the trajectory; these are referred to as marker occlusions. Marker occlusions are critical because the trajectory data is not collected at all, and retaking the capture may be necessary if the missing marker is significant to the application. For these occluded markers, Edit Tools also provide interpolation pipelines to model the occluded trajectory using other captured data points. Read through this page to understand each of the data editing methods in detail.
Steps in Editing
General Steps
Skim through the overall frames in a Take to get an idea of which frames and markers need to be cleaned up.
Select a marker that is often occluded or misplaced.
For each gap in frames, look for an unlabeled marker at the expected location near the solved marker position. Re-assign the proper marker label if the unlabeled marker exists.
Use the Trim Tails feature to trim both ends of the trajectory around each gap. It trims off a few frames adjacent to the gap where tracking errors might exist. This prepares occluded trajectories for Gap Filling.
Find the gaps to be filled, and use the Fill Gaps feature to model the estimated trajectories for occluded markers.
Re-Solve assets to update the solve from the edited marker data.
The trimming feature can be used to crop a specific frame range from a Take. For each round of trimming, a copied version of the Take will be automatically archived and backed up into a separate session folder.
Steps for trimming a Take
1) Determine a frame range that you wish to extract.
3) After zooming into the desired frame range, click Edit > Trim Current Range to trim out the unnecessary frames.
4) A dialog box will pop up asking to confirm the data removal. If you wish to reset the frame numbers upon trimming the take, select the corresponding check box on the pop-up dialog.
When a marker is unlabeled momentarily, the color of the tracked marker switches between white (labeled) and orange (unlabeled) under the default color settings. Mislabeled markers may have large gaps and result in a crooked model and trajectory spikes. First, explore the captured frames and find where the label has been misplaced. As long as the target markers are visible, this error can easily be fixed by reassigning the correct labels. Note that this method is preferred over the editing tools because it conserves the actual data and avoids approximation.
Frame Range: If you have a certain frame range selected from the timeline, data edits will be applied to the selected range only.
The Trim Tails method trims, or removes, a few data points before and after a gap. Whenever there is a gap in a marker trajectory, slight tracking distortions may be present on each end. For this reason, it is usually beneficial to trim off a small segment (~3 frames) of data. If these distortions are ignored, they may also interfere with other editing tools that rely on existing data points. Before trimming trajectory tails, check all gaps to see if the tracking data is distorted; it is better to preserve the raw tracking data as long as it is relevant. Set the appropriate trim settings, and trim the trajectory tails on selected or all frames. Each gap must satisfy the gap size threshold value for it to be considered for trimming. Each trajectory segment also needs to satisfy the minimum segment size; otherwise, it will be considered a gap. Finally, the Trim Size value determines how many leading and trailing trajectory frames are removed around a gap.
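Conceptually, the trimming step can be sketched as follows. This is a simplified illustration that ignores the gap-size and segment-size thresholds described above; occluded frames are represented as None.

```python
def trim_tails(traj, trim_size=3):
    """Remove (set to None) up to trim_size samples on each side of every
    gap, since data adjacent to occlusions is often distorted."""
    out = list(traj)
    n = len(out)
    for i in range(n):
        if traj[i] is None:
            continue
        # Trim this sample if a gap lies within trim_size frames of it.
        lo, hi = max(0, i - trim_size), min(n, i + trim_size + 1)
        if any(traj[k] is None for k in range(lo, hi)):
            out[i] = None
    return out
```

With `trim_size=1`, a single-frame gap in the middle of a trajectory grows by one frame on each side, which is then handed to gap filling.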
Smart Trim
The Smart Trim feature automatically sets the trimming size based on trajectory spikes near an existing gap. Often it is not necessary to delete many data points before or after a gap, but in some cases it is useful to delete more than in others. This feature determines whether each end of the gap is likely to contain errors and deletes an appropriate number of frames accordingly. The Smart Trim feature will not trim more frames than the defined Leading and Trailing values.
Gap filling is the primary method in the data editing pipeline, and this feature is used to remodel the trajectory gaps with interpolated marker positions. This is used to accommodate the occluded markers in the capture. This function runs mathematical modeling to interpolate the occluded marker positions from either the existing trajectories or other markers in the asset. Note that interpolating a large gap is not recommended because approximating too many data points may lead to data inaccuracy.
New to Motive 3.0: for Skeletons and Rigid Bodies, Model Asset Markers can be used to fill individual frames where a marker has been occluded. Model Asset Markers must first be enabled in the Properties pane while the desired asset is selected, and then they must be enabled for selection in the Viewport. When you reach frames where the marker is lost from camera view, select the associated Model Asset Marker in the 3D view, right-click for the context menu, and select 'Set Key'.
There are five different interpolation options offered in Edit Tools: constant, linear, cubic, pattern-based, and model-based. The first three interpolation methods (constant, linear, and cubic) look at a single marker trajectory and attempt to estimate the marker position using the data points before and after the gap. In other words, they attempt to model the gap by applying different degrees of polynomial interpolation. The other two interpolation options (pattern-based and model-based) reference visible markers and models to estimate the occluded marker position.
Constant
Applies a zero-degree approximation: assumes that the marker position is stationary and remains the same until the next corresponding label is found.
Linear
Applies a first-degree approximation, assuming that the motion is linear, to fill the missing data. Only use this when you are sure that the marker is moving in a linear motion.
Cubic
Applies third-degree polynomial interpolation, cubic spline, to fill the missing data in the trajectory.
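As an illustration of these single-trajectory methods, linear gap filling can be sketched with NumPy. This is a simplified stand-in for the Edit Tools pipeline, with gaps represented as NaN.

```python
import numpy as np

def fill_gaps_linear(traj):
    """Fill NaN gaps in a 1-D marker coordinate by linear (first-degree)
    interpolation between the samples bounding each gap."""
    traj = np.asarray(traj, dtype=float).copy()
    idx = np.arange(traj.size)
    gaps = np.isnan(traj)
    traj[gaps] = np.interp(idx[gaps], idx[~gaps], traj[~gaps])
    return traj
```

The constant option corresponds to holding the last valid sample instead, and the cubic option to fitting a third-degree spline through the bounding samples.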
Pattern based
This refers to the trajectories of selected reference markers and assumes the target marker moves along in a similar pattern. The Fill Target marker is specified from the drop-down menu under the Fill Gaps tool. When multiple markers are selected, a Rigid Body relationship is established among them, and that relationship is used to fill the trajectory gaps of the selected Fill Target marker as if they were all attached to the same Rigid Body. The following list is the general workflow for using the Pattern Based interpolation:
Select both reference markers and the target marker to fill.
Set an appropriate Max. Gap Size limit.
Select the Pattern Based interpolation option.
Specify the Fill Target marker in the drop-down menu.
Click Fill Selected, Fill All, or Fill Everything.
The curves tool applies a noise filter (4th-order low-pass Butterworth) to trajectory data, making the marker trajectory smoother. This is a bi-directional filter that does not introduce phase shifts. Using this tool, any vibrating or fluttering movements are filtered out. First, set the cutoff frequency for the filter to define how strongly your data will be smoothed. When the cutoff frequency is set high, only high-frequency signals are filtered. When the cutoff frequency is low, trajectory signals at a lower frequency range will also be filtered. In other words, a low cutoff frequency setting will smooth most of the transitioning trajectories, whereas a high cutoff frequency setting will smooth only the fluttering trajectories. High-frequency data are present during sharp transitions, and they can also be introduced by signal noise. Commonly used values for Filter Cutoff Frequency are between 7 Hz and 12 Hz, but you may want to set the value higher for fast and sharp motions to avoid softening motion transitions that need to stay sharp.
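A comparable zero-phase filter can be built with SciPy. This is a sketch of the same idea, not Motive's exact implementation; the rate and cutoff values are example numbers.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def smooth_trajectory(traj, cutoff_hz, capture_rate_hz):
    """4th-order low-pass Butterworth, run forward and backward (filtfilt)
    so no phase shift is introduced, matching the tool's description."""
    b, a = butter(4, cutoff_hz / (capture_rate_hz / 2.0), btype="low")
    return filtfilt(b, a, traj)

# Example: smooth a 120 Hz trajectory with an 8 Hz cutoff.
noisy = np.ones(240) + 0.01 * np.sin(np.arange(240) * 2.0)
smoothed = smooth_trajectory(noisy, 8.0, 120.0)
```

Because `filtfilt` applies the filter in both directions, slow movements pass through unshifted while high-frequency flutter is attenuated.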
This tool is used for quickly deleting any marker trajectories that exist for only a few frames. Markers that appear only momentarily are likely caused by noise in the data. If you wish to remove these short-lived trajectories to further clean up the data, the fragments tool can be used. You just need to set the minimum frame percentage under the settings. Then, when you click delete, individual marker trajectories that are shorter than the defined percentage will be deleted.
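The removal described here can be sketched per marker as follows. This is a simplified illustration that uses a frame-count threshold in place of Motive's percentage setting; occluded frames are represented as None.

```python
def delete_fragments(traj, min_frames):
    """Remove (set to None) any contiguous run of tracked samples shorter
    than min_frames; short-lived trajectories are likely noise."""
    out = list(traj)
    i = 0
    while i < len(out):
        if out[i] is None:
            i += 1
            continue
        j = i
        while j < len(out) and out[j] is not None:
            j += 1
        if j - i < min_frames:  # run [i, j) is too short: delete it
            out[i:j] = [None] * (j - i)
        i = j
    return out
```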
Various types of files, including the tracking data, can be exported out from Motive. This page provides information on what file formats can be exported from Motive and instructions on how to export them.
Once captures have been recorded into Take files and the corresponding 3D data have been reconstructed, tracking data can be exported from Motive in various file formats.
Exporting Tracking Data
If the recorded Take includes Rigid Body or Skeleton trackable assets, make sure all of the Rigid Bodies and Skeletons are Solved prior to exporting. The solved data will contain positions and orientations of each Rigid Body and Skeleton. If changes have been made to either the Rigid Body or Skeleton, you will need to solve the assets again prior to exporting.
Please note that if you have Assets that are unsolved and just wish to export reconstructed Marker data, you can toggle off Rigid Bodies and Bones (Skeletons) from the Export window (see image below).
In the export dialog window, the frame rate, the measurement scale, and the frame range of exported data can be configured. Additional export settings are available for each export file format. Read through the pages below for details on the export options for each file format:
Exporting a Single Take
Step 3. On the export dialogue window, select a file format and configure the corresponding export settings.
To export the entire frame range, set Start Frame and End Frame to Take First Frame and Take Last Frame.
To export a specific frame range, set Start Frame and End Frame to Start of Working Range and End of Working Range.
Step 4. Click Save.
Working Range:
The working range (also called the playback range) is both the view range and the playback range of a corresponding Take in Edit mode. Only within the working frame range will recorded tracking data be played back and shown on the graphs. This range can also be used to output specific frame ranges when exporting tracking data from Motive.
The working range can be set from the following places:
In the navigation bar of the Graph View pane, you can drag the handles on the scrubber to set the working range.
Exporting Multiple Takes
Step 2. Right-click on the selected Takes and click Export Tracking Data from the context menu.
Step 3. An export dialogue window will show up for batch exporting tracking data.
Step 4. Select the desired output format and configure the corresponding export settings.
Step 5. Select frame ranges to export under the Start Frame and the End Frame settings. You can export either the entire frame range or a specified frame range on all of the Takes. When exporting specific ranges, the desired working range must be set for each respective Take.
To export entire frame ranges, set Start Frame and End Frame to Take First Frame and Take Last Frame.
To export specific frame ranges, set Start Frame and End Frame to Start of Working Range and End of Working Range.
Step 6. Click Save.
Motive Batch Processor:
Motive exports reconstructed 3D tracking data in various file formats and exported files can be imported into other pipelines to further utilize capture data. Available export formats include CSV, C3D, FBX, BVH, and TRC. Depending on which options are enabled, exported data may include reconstructed marker data, 6 Degrees of Freedom (6 DoF) Rigid Body data, or Skeleton data. The following chart shows what data types are available in different export formats:
CSV and C3D exports are supported in both Motive Tracker and Motive Body licenses. FBX, BVH, and TRC exports are only supported in Motive Body.
When an asset definition is exported to a MOTIVE user profile, it stores marker arrangements calibrated in each asset, and they can be imported into different takes without creating a new one in Motive. Note that these files specifically store the spatial relationship of each marker, and therefore, only the identical marker arrangements will be recognized and defined with the imported asset.
To export the assets, go to the File menu and select Export Assets to export all of the assets in the Live-mode or in the current TAK file(s). You can also use File → Export Profile to export other software settings including the assets.
Recorded NI-DAQ analog channel data can be exported into C3D and CSV files along with the mocap tracking data. Follow the tracking data export steps outlined above and any analog data that exists in the TAK will also be exported.
CSV Export: When exporting tracking data into CSV, additional CSV files will be exported for each of the NI-DAQ devices in a Take. Each of the exported CSV files will contain basic properties and settings at its header, including device information and sample counts. The voltage amplitude of each analog channel will be listed. Also, mocap frame rate to device sampling ratio is included since analog data is usually sampled at higher sampling rates.
Note
The coordinate system used in Motive (y-up right-handed) may be different from the convention used in the biomechanics analysis software.
Common Conventions
Since Motive uses a different coordinate system than the system used in common biomechanics applications, it is necessary to modify the coordinate axis to a compatible convention in the C3D exporter settings. For biomechanics applications using z-up right-handed convention (e.g. Visual3D), the following changes must be made under the custom axis.
X axis in Motive should be configured to positive X
Y axis in Motive should be configured to negative Z
Z axis in Motive should be configured to positive Y.
This will convert the coordinate axis of the exported data so that the x-axis represents the anteroposterior axis (front/back), the y-axis represents the mediolateral axis (left/right), and the z-axis represents the longitudinal axis (up/down).
Reference Video Type: Only compressed MJPEG reference videos can be recorded and exported from Motive. Export for raw grayscale videos is not supported.
Media Player: The exported videos may not be playable in Windows Media Player; please use a more robust media player (e.g. VLC) to play the exported video files.
Sample Skeleton Label XML File
In Motive, Skeleton assets are used for tracking human motions. These assets auto-label specific sets of markers attached to human subjects, or actors, and create skeletal models. Unlike Rigid Body assets, Skeleton assets require additional calculations to correctly identify and label 3D reconstructed markers on multiple semi-rigid body segments. To accomplish this, Motive uses pre-defined Skeleton Marker Set templates, each of which is a collection of marker labels and their specific positions on a subject. According to the selected Marker Set, retroreflective markers must be placed at pre-designated locations on the body. This page details instructions on how to create and use Skeleton assets in Motive.
Note:
Motive license: Skeleton features are supported only in Motive:Body or Motive:Body - Unlimited.
Skeleton Count: The standard Motive:Body license supports up to 3 Skeletons. For tracking a higher number of Skeletons, activate with a Motive:Body - Unlimited license.
Height requirement: For Skeleton tracking, the subject must be between 1'7" ~ 9' 10" tall.
Use the default create layout to open related panels that are necessary for Skeleton creation. (CTRL + 2).
Attaching markers directly onto a person’s skin can be difficult because of hair, oil, and moisture from sweat. Plus, dynamic human motions tend to move the markers during capture, so use appropriate skin adhesives for securing marker bases onto the skin. Alternatively, mocap suits allow velcro marker bases to be used.
Joint Markers
Joint markers need to be placed carefully along corresponding joint axes. Proper placements will minimize marker movements during a range of motions and will give better tracking results. To accomplish this, ask the subject to flex and extend the joint (e.g. knee) a few times and palpate the joint to locate the corresponding axis. Once the axis is located, attach the markers along the axis where skin movement is minimal during a range of motion.
Wipe off any moisture or oil on the skin before attaching the marker.
Avoid wearing clothing or shoes with reflective materials since they can introduce extraneous reflections.
Tie back hair which can occlude the markers around the neck.
Remove reflective jewelry.
Place markers in an asymmetrical arrangement by offsetting the related segment markers (markers that are not on joints) at slightly different heights.
Additional Tips
All markers need to be placed at the respective anatomical landmarks.
Place markers where you can palpate the bone or where there is less soft tissue in between. These spots have less skin movement and provide secure marker attachment.
Joint markers are vulnerable to skin movements because of the range of motion in the flexion and extension cycle. In order to minimize the influence, a thorough understanding of the biomechanical model used in the post-processing is necessary. In certain circumstances, the joint line may not be the most appropriate location. Instead, placing the markers slightly superior to the joint line could minimize the soft tissue artifact, still taking care to maintain parallelism with the anatomical joint line.
Use appropriate adhesives to place markers and make sure they are securely attached.
Step 1.
Step 2.
Step 3.
Step 4.
Step 5.
Step 6.
The next step is to select the Skeleton creation pose settings. Under the Pose section drop-down menu, select the desired calibration pose you want to use for defining the Skeleton. This is set to the T-pose by default.
Step 7.
Step 8.
Click Create to create the Skeleton. Once the Skeleton model has been defined, confirm that all Skeleton segments and assigned markers are located at the expected locations. If any of the Skeleton segments seem to be misaligned, delete the Skeleton and create it again after adjusting the marker placements and the calibration pose.
In Edit Mode
Reset Skeleton Tracking
When Skeleton tracking is not acquired successfully during the capture for some reason, you can use the CTRL + R hotkey to trigger the solver to re-boot the Skeleton asset.
A proper calibration posture is necessary because the pose of the created Skeleton will be calibrated from it. Read through the following explanations on proper T-poses and A-poses.
T pose
The T-pose is commonly used as the reference pose in 3D animation to bind characters and assets together. Motive uses this pose when creating Skeletons. A proper T-pose requires a straight posture with the back straight and the head looking directly forward. Both arms are stretched to each side, forming a "T" shape. Both arms and legs must be straight, and both feet need to be aligned parallel to each other.
A pose
Palms Down: Arms straight and abducted sideways at approximately 40 degrees, palms facing downwards.
Palms Forward: Arms straight and abducted sideways at approximately 40 degrees, palms facing forward. Be careful not to over-rotate the arms.
Elbows Bent: Similar to the other A-poses: arms abducted at approximately 40 degrees, elbows bent so that the forearms point towards the front. Palms facing downwards, both forearms aligned.
Calibration markers exist only in the biomechanics Marker Sets.
Many Skeleton Marker Sets do not have medial markers because they can easily collide with other body parts or interfere with the range of motion, all of which increase the chance of marker occlusions.
However, medial markers are beneficial for precisely locating joint axes by associating two markers on the medial and lateral sides of a joint. For this reason, some biomechanics Marker Sets use medial markers as calibration markers. Calibration markers are used only when creating Skeletons and are removed afterward for the actual capture. These calibration markers are highlighted in red in the 3D view when a Skeleton is first created.
Existing Skeleton assets can be recalibrated using the existing Skeleton information. Basically, the recalibration recreates the selected Skeleton using the same Skeleton Marker Set. This feature recalibrates the Skeleton asset and refreshes expected marker locations on the assets.
Skeleton recalibration does not work with Skeleton templates with added markers.
Skeleton Marker Sets can be modified slightly by adding or removing markers to or from the template. Follow the steps below for adding or removing markers. Note that modifying, and especially removing, Skeleton markers is not recommended, since changes to default templates may negatively affect Skeleton tracking when done incorrectly. Removing too many markers may result in poor Skeleton reconstructions, while adding too many markers may lead to labeling swaps. If any modification is necessary, keep the changes minimal.
You can add or remove Marker Constraints from a Rigid Body or a Skeleton using the Builder pane. This adds markers to, or removes markers from, the existing Rigid Body or Skeleton definition. Follow the steps below to add or remove markers:
To Add
Select a Skeleton segment that you wish to add extra markers onto.
Then, CTRL + left-click on the marker that you wish to add to the template.
On the Marker Constraints tool in the Builder pane, click + to add and associate the selected marker to the selected segment.
Reconstruct and Auto-label the Take.
To Remove
[Optional] Under the advanced properties of the target Skeleton, enable Marker Lines property to view which markers are associated with different Skeleton bones.
Select the Skeleton segment that you wish to modify and select the associated Marker Constraints that you wish to dissociate.
Delete the association by clicking the "-" button in the Constraints pane while the marker is selected.
Reconstruct and Auto-label the Take.
When asset definitions are exported to a MOTIVE user profile, the profile stores the calibrated marker arrangement of each asset, and the assets can be imported into different takes without creating a new asset in Motive. Note that these files store the spatial relationship of each marker, so only identical marker arrangements will be recognized and defined with the imported asset.
To export the assets, go to File → Export Assets to export all of the assets in Live mode or in the current TAK file. You can also use File → Export Profile to export other software settings along with the assets.
To export Skeleton constraints XML file
To import Skeleton constraints XML file
This page provides information and instructions on how to utilize the Probe Measurement Kit.
The measurement probe tool uses the precise tracking of OptiTrack mocap systems to measure 3D locations within a capture volume. A probe with an attached Rigid Body is included in the purchased measurement kit. From the markers on the Rigid Body, Motive calculates a precise x-y-z location of the probe tip, allowing you to collect 3D samples in real time with sub-millimeter accuracy. For the most precise calculation, a probe calibration process is required. Once the probe is calibrated, it can be used to sample single points, or multiple points to compute distances or angles between sampled 3D coordinates.
Measurement kit includes:
Measurement probe
Calibration block with 4 slots, with approximately 100 mm spacing between each point.
Creating a probe using the Builder pane
Under the Type drop-down menu, select Probe. This will bring up the options for defining a Rigid Body for the measurement probe.
Select the Rigid Body created in step 2.
Place and fit the tip of the probe in one of the slots on the provided calibration block.
Note that there are two steps in the calibration process: refining the Rigid Body definition and calibrating the pivot point. Click the Create button to initiate the probe refinement process.
Slowly move the probe in a circular pattern while keeping the tip fitted in the slot, tracing a cone shape overall. Gently rotate the probe to collect additional samples.
After the refinement, it will automatically proceed to the next step; the pivot point calibration.
Repeat the same movement to collect additional sample data for precisely calculating the location of the pivot or the probe tip.
When sufficient samples are collected, the pivot point will be positioned at the tip of the probe and the Mean Tip Error will be displayed. If the probe calibration was unsuccessful, repeat the calibration from step 4.
Caution
The probe tip MUST remain fitted securely in the slot on the calibration block during the calibration process.
Also, do not press down on the probe, since deformation from compression could affect the result.
Note: Custom Probes
It is highly recommended to use the probe kit with this feature. That said, any markered object with a pivot arm can be used to define a custom probe in Motive, but a custom probe may produce less accurate measurements, especially if the pivot arm and the object are not rigid or if any slight translation occurs during the probe calibration steps.
Using the Probe pane for sample collection
Place the probe tip on the point that you wish to collect.
Click Take Sample on the Measurement pane.
Collecting additional samples will provide distance and angles between collected samples.
As samples are collected, their coordinate data is automatically written out to CSV files in the OptiTrack documents folder, located at C:\Users\[Current User]\Documents\OptiTrack. This file contains the 3D positions of all collected measurements, their respective RMSE values, and the distances between each consecutive pair of sample points.
Also, if needed, you can trigger Motive to export the collected sample coordinate data into a designated directory. To do this, simply click the export option on the Probe pane.
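Since the exported CSV lists distances between consecutive sample points, a post-processing script can recompute or verify them from the raw x-y-z values. A minimal Python sketch (the exact CSV column layout is an assumption; check the header of your exported file before parsing):

```python
import math

def pairwise_distances(points):
    # Euclidean distance between each consecutive pair of 3D samples.
    return [math.dist(p, q) for p, q in zip(points, points[1:])]

# Hypothetical probe samples 100 mm apart, e.g. adjacent slots on the
# calibration block:
samples = [(0.0, 0.0, 0.0), (100.0, 0.0, 0.0)]
print(pairwise_distances(samples))  # [100.0]
```

Comparing these recomputed distances against the known 100 mm slot spacing of the calibration block is a quick sanity check on a probe calibration.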
A Motive Body license can export tracking data into FBX files for use in other 3D pipelines. There are two types of FBX files: Binary FBX and ASCII FBX.
Notes for MotionBuilder Users
When exporting tracking data to MotionBuilder in the FBX file format, make sure the exported frame rate is supported in MotionBuilder (Mobu). Mobu supports only a select set of playback frame rates, and the rate of the exported FBX file must match one of them for the data to play back properly.
If a non-standard frame rate that is not supported is selected, the closest supported frame rate is applied.
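The snapping described above can be emulated when preparing an export: pick the supported rate nearest to the capture rate. A sketch, assuming a typical list of MotionBuilder playback rates (this list is an assumption; verify against your Mobu version):

```python
# Common MotionBuilder playback rates (assumed list; confirm in Mobu).
SUPPORTED_RATES = (23.976, 24, 25, 29.97, 30, 50, 59.94, 60, 100, 120)

def closest_supported_rate(rate, supported=SUPPORTED_RATES):
    # Return the supported playback rate nearest to the requested one.
    return min(supported, key=lambda r: abs(r - rate))

print(closest_supported_rate(240))  # 120
```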
Exported FBX files in ASCII format can contain reconstructed marker coordinate data as well as 6 degrees of freedom (6 DoF) data for each involved asset, depending on the export settings. ASCII files can also be opened and edited in a text editor.
FBX ASCII Export Options
Binary FBX files are more compact than ASCII FBX files. Reconstructed 3D marker data is not included within this file type, but selected Skeletons are exported by saving corresponding joint angles and segment lengths. For Rigid Bodies, positions and orientations at the defined Rigid Body origin are exported.
FBX Binary Export Options
This page covers basic types of trackable assets in Motive. The assets in Motive are used for both tracking of the objects and labeling of 3D markers in Motive, and they are managed under the Assets pane which can be opened by clicking on the icon. Each type of asset is further explained in the related pages.
In Motive, the recorded mocap data is stored in a file format called Take (TAK), and multiple Take files can be grouped within a session folder. The Data pane is the primary interface for managing capture files in Motive. This pane can be accessed from the icon on the main Toolbar, and it contains a list of session folders and the corresponding Take files that are recorded or loaded in Motive.
Click the button on the toolbar at the bottom of the Data pane to hide or expand the list of open Session Folders.
The active Session Folder is noted with a flag icon. To switch to a different folder, left-click on the folder name in the Session list.
When needed, an additional Viewer pane can be opened under the View tab or by clicking the icon on the main toolbar.
Switching to Live Mode in Motive using the control deck.
Right-click and drag on a graph to free-form zoom in and out on both the vertical and horizontal axes. If Autoscale Graph is enabled, the vertical axis range will be fixed according to the maximum and minimum values of the plotted data.
The Application Settings can be accessed under the Edit tab or by clicking the icon on the main toolbar.
Assets pane: While the markers are selected in Motive, click on the add button in the Assets pane.
The position and orientation of a tracked Rigid Body can be monitored in real time from the Info pane. Simply select a Rigid Body in Motive, open the Info pane, and access the Rigid Bodies tool to view the real-time tracking data of the selected Rigid Body.
By default, Motive starts up in the calibration layout, which contains the panes necessary for the calibration process. This layout can also be accessed by selecting the calibration layout from the top-right corner, or by using the Ctrl+1 hotkey.
The Calibration pane will guide you through the calibration process. This pane can be accessed by clicking the icon on the toolbar or by entering the calibration layout from the top-right corner. For a new system calibration, click the New Calibration button and it will take you to the next step.
When cameras detect reflections in their view, a warning sign indicates which cameras are seeing reflections; on Prime series cameras, the indicator LED ring will also light up in white.
Masks can also be applied from the Cameras viewport if needed. In the view pane, while the cameras view is selected, click on the gear icon on the toolbar and options to apply auto-mask or clear existing masks will be listed. You can also click on the icon to switch to different modes for manually applying and/or erasing masks.
In Motive, all of the recorded capture files are managed through the Data pane. Each capture is saved in a Take (TAK) file, which can be played back in Edit mode later. Related Take files can be grouped within session folders: simply create a new folder in the desired directory and load the folder into the Data pane. The currently selected session folder is indicated with a flag symbol, and all newly recorded Takes will be saved in that folder.
Rigid Body markers and Skeleton bone markers are referred to as Marker Constraints. They appear as transparent spheres within a Rigid Body or a Skeleton, and each sphere reflects the position where the asset expects to find a 3D marker. When asset definitions are created, it is assumed that the markers are fixed in place and do not move over the course of the capture.
When a marker is occluded, the CSV file will contain blank cells, which can interfere with scripts that process the CSV data. It is recommended to optimize the system setup to reduce occlusions. To omit unnecessary frame ranges with frequent marker occlusions, export only the frame range with the most complete tracking results. Another solution is to interpolate the missing trajectories in post-processing before exporting.
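One way for a script to cope with those blank cells is to fill short interior gaps with linear interpolation, similar in spirit to Motive's gap filling. A minimal pure-Python sketch (the `max_gap` threshold and per-coordinate handling are illustrative choices, not Motive behavior):

```python
def fill_gaps_linear(values, max_gap=10):
    # values: one coordinate channel, with None marking occluded frames.
    out = list(values)
    i = 0
    while i < len(out):
        if out[i] is None:
            start = i
            while i < len(out) and out[i] is None:
                i += 1
            # Interpolate only interior gaps no longer than max_gap frames;
            # leading/trailing gaps have no endpoints to interpolate between.
            if start > 0 and i < len(out) and (i - start) <= max_gap:
                a, b = out[start - 1], out[i]
                n = i - start + 1
                for k in range(start, i):
                    out[k] = a + (b - a) * (k - start + 1) / n
        else:
            i += 1
    return out

print(fill_gaps_linear([0.0, None, None, 3.0]))  # [0.0, 1.0, 2.0, 3.0]
```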
For Takes containing force plate or data acquisition (DAQ) devices, an additional CSV file will be exported for each connected device. For example, if the setup includes two force plates and an NI-DAQ device, a total of four CSV files will be created when you export the tracking data from Motive: one for the mocap data and one per device. Each exported CSV file contains basic properties and settings in its header (if Header Information is selected), including device information and sample counts. The ratio of the mocap frame rate to the device sampling rate is also included, since force plate and analog data are typically sampled at higher rates.
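When the device rate is an integer multiple of the mocap frame rate, the ratio in the header tells you how many analog samples line up with each mocap frame. A sketch of that indexing (assumes an exact integer ratio):

```python
def device_samples_for_frame(frame, ratio):
    # Indices of analog samples recorded during one mocap frame, given the
    # device-to-mocap sampling ratio (e.g. 1000 Hz / 100 FPS -> ratio of 10).
    return range(frame * ratio, (frame + 1) * ratio)

print(list(device_samples_for_frame(2, 10)))
# [20, 21, 22, 23, 24, 25, 26, 27, 28, 29]
```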
Solved Data: After editing marker data in a recorded Take, the corresponding solved data must be updated.
3D Viewport: From the 3D viewport, check Marker Labels in the visual aids options to view marker labels for selected markers.
Labels pane: The Labels pane lists all of the marker labels and the corresponding gap percentage for each label. The color of the label also indicates whether the label is present or missing at the current frame.
Tracks View: For frames where the selected label is not assigned to any marker, the timeline scrubber is highlighted in red. The tracks view also provides a list of labels and their continuity in a captured Take.
Manual Label: Manually label individual markers using the Labels pane.
For tracking Rigid Bodies and Skeletons, Motive can use the auto-labeler to automatically label associated markers, both in real time and in post-processing. The auto-labeler uses the reference assets that are enabled (checked in the Assets pane) to search for sets of markers that match each definition and assign the pre-defined labels throughout the capture.
There are times, however, when it is necessary to manually label a section or all of a trajectory, either because the markers of a Rigid Body or a Skeleton were misidentified (or unidentified) during capture, or because individual markers need to be labeled without using any tracking assets. In these cases, the Labels pane in Motive is used to perform manual labeling of individual trajectories. The manual labeling workflow is supported only in post-processing, when a Take file (TAK) has been loaded with 3D data as its playback type. If a Take contains only 2D data, it must be reconstructed first in order to assign or edit the marker labels in its 3D data. This manual labeling process, along with other data editing steps, is typically referred to as post-processing of mocap data.
Select Takes from the Data pane.
Note: Be careful when reconstructing a Take again, either by Reconstruct or Reconstruct and Auto-label, because it will overwrite the 3D data, and any post-processing edits to trajectories and marker labels will be discarded. Also, for Takes involving Skeleton assets, the recorded Skeleton marker labels, which were intact during the live capture, may be discarded, and the reconstructed markers may not be auto-labeled again if the Skeletons are never in well-trackable poses throughout the captured Take. This is another reason to start a capture with a calibration pose (e.g. a T-pose).
Labels in Marker Set, Rigid Body, and Skeleton assets are managed using the Constraints pane; refer to its documentation to see how to add and/or modify marker labels. Once the labels are added, the Labels pane can be used to assign them to markers.
The Labels pane is used to assign, remove, and edit marker labels. The Tracks View can be used in conjunction with the Labels pane to monitor which markers and gaps are associated. The Labels pane also shows the number of occluded gaps in each label, and it can be used along with the editing tools for complete post-processing.
Using the Labels pane, you can assign marker labels for each asset (Marker Set, Rigid Body, and Skeleton) via the QuickLabel mode. The Labels pane also shows a list of labels involved in the Take and their corresponding percent-completeness values, which indicate the percentage of frames in the Take for which the trajectory has been labeled. If a trajectory has no gaps (100% complete), no number is shown. You can use this pane to quickly locate gaps in a trajectory.
For a given frame, all labels are color-coded. For each frame of 3D data, assigned marker labels are shown in white, labels without reconstructions are shown in red, and unlabeled reconstructions are shown in orange, matching how they are presented in the 3D viewport.
See the related page for a detailed explanation of each option.
The QuickLabel mode allows you to tag labels with single clicks in the view pane, and it is a handy way to reassign or modify marker labels throughout the capture. When the QuickLabel mode is toggled, the mouse cursor switches to a finger icon with the selected label name attached next to it. Also, when the display-labels option is enabled, all assigned marker labels are displayed next to each marker in the viewport. Select the Marker Set you wish to label, and tag the appropriate labels to each marker throughout the capture.
When assigning labels using the QuickLabel mode, the labeling scope is configured from the labeling range settings. You can restrict the labeling operation to apply from the current frame backward, from the current frame forward, or both. You can also restrict labeling operations to apply the selected label to all frames in the Take, to a selected frame range, or to a trajectory 'fragment' enclosed by gaps or spikes. The fragment/spike setting is used by default and works best for identifying mislabeled frame ranges and assigning marker labels. See the related page for details on each feature.
Inspect the behavior of the selected trajectory and decide whether you want to apply the selected label to frames forward, frames backward, or both. This option is selected from the labeling range settings on the Labels pane.
Switch to QuickLabeling Mode (Hotkey: D).
In the Labels pane, assign the selected label to a marker. If the increment option is set, the label selection in the Labels pane will automatically advance each time you assign a label.
After assigning all labels, switch back to the normal Select mode.
If marker labels are set to visible, Motive will show all of the marker labels when entering the QuickLabel mode. To hide the marker labels in the viewport, click the visual aids option in the perspective view and uncheck marker labels.
The following section provides the general labeling steps in Motive. Note that the labeling workflow is flexible, and alternative approaches to the steps listed here can also be used. Use the auto-labeling pipelines in combination with manual labeling to best reconstruct and label the 3D data of your capture.
Use the Labels pane to monitor occlusion gaps and labeling errors as you post-process captured Takes.
When using the QuickLabel mode, choose the most appropriate labeling range setting (all, selected, spike, or fragment) to efficiently label selected trajectories.
Hotkeys can increase the speed of the workflow. Use the Z and Shift+Z hotkeys to quickly find gaps in the selected trajectory.
Show/hide Skeleton visibility under the visual aids options in the perspective view to get a better view of the markers when assigning marker labels.
Toggle Skeleton selectability under the selection options in the perspective view to use the Skeleton as a visual aid without it getting in the way of marker data.
Show/hide Skeleton sticks and marker colors under the visual aids options for intuitive identification of labeled markers as you tag through Skeleton markers.
For Skeleton assets, there is a property that can be used to display tracking errors on Skeleton segments.
Step 1. In the Data pane, Reconstruct and Auto-label the Take with all of the desired assets enabled.
Step 2. Examine the trajectories and navigate to a frame where labeling errors are frequent.
Step 3. Open the Labels pane.
Step 8. In the viewport, assign the labels to the corresponding marker reconstructions by clicking on them.
Step 2. Reconstruct and Auto-label, or just Reconstruct, the Take with all of the desired assets enabled in the Data pane. If you reconstruct only, you can skip steps 3 and 5 for the first iteration.
Step 4. Using the Labels pane, manually fix or assign marker labels, paying attention to your label settings (direction, max gap, max spike, selected duration).
For more data editing options, read through the data editing page.
There are three types of data: 2D data, 3D data, and Solved data. Each type is covered in detail throughout this page, but in short: 2D data is the captured camera frame data, 3D data is the reconstructed 3-dimensional marker data, and Solved data is the calculated positions and orientations of Rigid Bodies and Skeleton segments.
Motive saves tracking data into a Take file (TAK extension). When a capture is initially recorded, all of the 2D data, real-time-reconstructed 3D data, and solved data are saved into the Take file. Recorded 3D data can be post-processed further in Edit mode, and when needed, a new set of 3D data can be re-obtained from the saved 2D data by running the reconstruction pipelines. From the 3D data, Solved data can be derived.
Available data types are listed on the Data pane. When you open a Take in Edit mode, the loaded data type is highlighted at the top-left corner of the 3D viewport. If available, 3D data is loaded first by default, and the 2D data can be accessed from the Data pane.
Images in recorded 2D data depend on the video type of each camera that was selected at the time of the capture. Cameras set to reference modes (MJPEG grayscale) record reference videos, while cameras set to tracking modes (object, precision, segment) record 2D object data that can be used in the reconstruction process. This 2D object data contains the x and y centroid positions of the captured reflections, as well as their corresponding sizes (in pixels) and roundness.
Using the 2D object data along with the camera calibration information, 3D data is computed. Extraneous reflections that fail to satisfy the 2D object filter parameters are filtered out, and only the remaining reflections are processed. The process of converting 2D centroid locations into 3D coordinates is called Reconstruction, which is covered in a later section of this page.
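At its core, reconstruction triangulates each marker from two or more calibrated camera rays. A simplified two-ray sketch of the classic midpoint method is shown below; Motive's actual solver handles many cameras, lens distortion, and residual thresholds, so treat this as an illustration of the geometry only:

```python
def triangulate_midpoint(c1, d1, c2, d2):
    # Closest-point midpoint between two rays p1(t) = c1 + t*d1 and
    # p2(s) = c2 + s*d2, where c* are camera centers and d* ray directions.
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    w0 = [a - b for a, b in zip(c1, c2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b  # zero when the rays are parallel
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    p1 = [ci + t * di for ci, di in zip(c1, d1)]
    p2 = [ci + s * di for ci, di in zip(c2, d2)]
    return [(u + v) / 2 for u, v in zip(p1, p2)]

# Two cameras at (0,0,0) and (2,0,0), both with rays aimed at (1,1,1):
print(triangulate_midpoint((0, 0, 0), (1, 1, 1), (2, 0, 0), (-1, 1, 1)))
# [1.0, 1.0, 1.0]
```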
3D data can be reconstructed either in real-time or in post-capture. For real-time capture, Motive processes captured 2D images on a per-frame basis and streams the 3D data into external pipelines with extremely low processing latency. For recorded captures, the saved 2D data can be used to create a fresh set of 3D data through , and any existing 3D data will be overwritten with the newly reconstructed data.
2D data contains the 2D frames, or 2D object information, captured by each camera in the system. It can be monitored from the Cameras view.
3D data contains the 3D coordinates of reconstructed markers. 3D markers are reconstructed from 2D data and show up in the perspective view, and each trajectory can be monitored in the Graph View pane. In recorded 3D data, marker labels can be assigned to reconstructed markers either through the auto-labeling process, using asset definitions, or manually. From these labeled markers, Motive solves the positions and orientations of Rigid Bodies and Skeletons.
Recorded 3D data is editable: each frame of a trajectory can be deleted or modified. The post-processing Edit Tools can be used to interpolate missing trajectory gaps or apply smoothing, and the Labels pane can be used to assign or reassign marker labels.
Lastly, tracking data from recorded 3D data can be exported into various file formats — CSV, C3D, FBX, and more.
The Fill Gaps tool can be used to fill trajectory gaps.
Solved data is the positional and rotational, 6 degrees of freedom (DoF) tracking data of Rigid Bodies and Skeletons. After a Take has been recorded, either select Solve all Assets by right-clicking the Take in the Data pane, or right-click the asset in the Assets pane and select Solve while in Edit mode. Takes that contain solved data are indicated under the solved column.
Recorded 2D data, audio data, and reference videos can be deleted from a Take file. To do this, open the Data pane, right-click on the recorded Take(s), and click Delete 2D Data in the context menu. A dialog window will then pop up asking which types of data to delete. After removing the data, a backup file is archived into a separate folder.
Deleting 2D data significantly reduces the size of the Take file. You may want to delete recorded 2D data when a Take already contains a final version of reconstructed 3D data and the 2D data is no longer needed. However, be aware that this removes the most fundamental data from the Take file: the deletion cannot be reverted, and without 2D data, 3D data cannot be reconstructed again.
Recorded 3D data can be deleted from the context menu in the Data pane. Right-click on the selected Takes and click Delete 3D data, and all reconstructed 3D information will be removed from the Take, including all edits and labeling. Again, new 3D data can always be reacquired by reconstructing and auto-labeling the Take from the 2D data.
When multiple Takes are selected in the Data pane, deleting 3D data removes the 3D data, over the entire frame range, from all of the selected Takes.
Assigned marker labels can be deleted from the context menu in the Data pane. The Delete Marker Labels feature removes all marker labels from the 3D data of the selected Takes, leaving all markers unlabeled.
Tracking data can be exported into the C3D file format. C3D (Coordinate 3D) is a binary file format widely used in biomechanics and motion study applications. Recorded data from external devices, such as force plates and NI-DAQ devices, is included in exported C3D files. Note that common biomechanics applications use a Z-up right-handed coordinate system, whereas Motive uses a Y-up right-handed coordinate system. More details on coordinate systems are described in a later section. Find more about C3D files on the C3D.org website.
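As a concrete illustration of the axis difference, a Y-up Motive point can be rotated into a Z-up frame before further biomechanics processing. One common mapping is sketched below; the forward-axis convention differs between packages, so treat the exact mapping as an assumption to verify against your target application:

```python
def y_up_to_z_up(point):
    # Rotate a Y-up, right-handed point (Motive) into a Z-up, right-handed
    # frame: (x, y, z) -> (x, -z, y). This is a proper rotation (det = +1),
    # so handedness is preserved; only the axis labeling changes.
    x, y, z = point
    return (x, -z, y)

print(y_up_to_z_up((1.0, 2.0, 3.0)))  # (1.0, -3.0, 2.0)
```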
The data editing tools in Motive enable users to correct tracking errors in recorded capture data. There are multiple editing methods available, and you need to understand them clearly in order to properly fix errors in captured trajectories. Tracking errors are sometimes inevitable due to the nature of marker-based motion capture systems, so understanding the functionality of the editing tools is essential. Before getting into details, note that post-editing of motion capture data often takes considerable time and effort: all captured frames must be examined carefully and corrections made for each error discovered. Furthermore, some of the editing tools apply mathematical modifications to marker trajectories, and these tools may introduce discrepancies if misused. For these reasons, we recommend optimizing the capture setup so that tracking errors are prevented in the first place.
Refer to the Labels pane and inspect the gap percentages for each marker.
Look through the frames in the Graph View pane and inspect the gaps in each trajectory.
In some cases, you may wish to delete 3D data for certain markers in a Take file, for example, to remove corrupt 3D reconstructions or trim out erroneous movements and improve data quality. Reconstructed 3D markers can be deleted for a selected range of frames. To delete a 3D marker entirely, select the 3D markers you wish to delete and press the Delete key; they will be erased from the 3D data. To delete 3D markers for a specific frame range, select the frame range in the Graph View pane first, then press the Delete key; the 3D trajectories of the selected markers will be erased for the highlighted frame range.
Note: Deleted 3D data can be recovered by reconstructing new 3D data from the recorded 2D data.
2) Set the working range (also called the view range) on the Graph View pane. All frames outside of this range will be trimmed out. You can set the working range through the following approaches:
Specify the starting and ending frames from the navigation bar on the Graph View pane.
Highlight, or select, the desired frame range in the Graph View pane, and zoom into it using the zoom-to-fit hotkey (F) or the corresponding icon.
Set the working range from the Control Deck by entering start and end frames.
The first step in post-processing is to check for labeling errors. Labels can be lost, or mislabeled to irrelevant markers, either momentarily or entirely during capture; labeling errors are especially likely when the marker placement is not optimized or when there are extraneous reflections. As mentioned on other pages, marker labels are vital when tracking a set of markers, because each label affects how the overall set is represented. Examine the recorded capture and spot labeling errors in the perspective view, or by checking the trajectories of suspicious markers in the Graph View pane. Use the Labels pane or the Tracks View mode in the Graph View pane to monitor unlabeled markers in the Take.
Read more about labeling markers on the labeling page.
The Edit Tools provide functionality to modify and clean up 3D trajectory data after a capture has been taken. Multiple post-processing methods are featured in the Edit Tools for different purposes: Trim Tails, Fill Gaps, Smooth, and Swap Fix. Trim Tails removes data points in the few frames before and after a gap. Fill Gaps calculates missing marker trajectories using interpolation. Smoothing filters out unwanted noise in the trajectory signal. Finally, Swap Fix switches the marker labels of two selected markers. Remember that modifying data with the Edit Tools changes the raw trajectories, so overuse is not recommended. Read through each method and familiarize yourself with the Edit Tools; note that all changes made with them can be undone and redone.
First, set the Max. Gap Size value to define the maximum frame length for an occlusion to be considered a gap; gaps longer than this are not affected by the filling mechanism. Set a reasonable maximum gap size for the capture after looking through the occluded trajectories. To quickly navigate the trajectory graphs in the Graph View pane for missing data, use the Find Gap features (Find Previous and Find Next) to automatically select a gap frame region so the data can be interpolated. Then apply the Fill Gaps feature while the gap region is selected. Various interpolation options are available, including Constant, Linear, Cubic, Pattern-based, and Model-based.
Examine the trajectory of the target marker in the Graph View pane: its size, range, and number of gaps.
When interpolating only a specific section of the capture, select the range of frames first.
In some cases, marker labels may be swapped during capture. Swapped labels can result in erratic orientation changes or crooked Skeletons, but they can be corrected by re-labeling the markers. The Swap Fix feature in the Edit Tools can be used to correct obvious swaps that persist through the capture. Select the two markers whose labels are swapped, and select the frame range you wish to edit. The Find Previous and Find Next buttons navigate to the frames where the markers' positions have been exchanged. If a frame range is not specified, the change is applied from the current frame forward. Finally, switch the marker labels by clicking the Apply Swap button. As long as both labels are present in the frame and the only correction needed is to exchange the labels, the Swap Fix tool can be used to make the correction.
Solved Data: After editing marker data in a recorded Take, the corresponding solved data must be updated.
Reconstructed 3D data is required to export marker data, labeled 3D data is required when exporting markers labeled from assets, and solved data is required prior to exporting assets.
Step 1. Open and select a Take to export from the Data pane. The selected Take must contain reconstructed 3D data.
Step 2. Under the File tab on the command bar, click File → Export Tracking Data. This can also be done by right-clicking a selected Take in the Data pane and clicking Export Tracking Data in the context menu.
You can also use the navigation controls on the Graph View pane to zoom in or out on the frame ranges to set the working range.
The start and end frames of a working range can also be set from the Control Deck when in Edit mode.
Step 1. Under the , shift + select all the Takes that you wish to export.
Exporting multiple Take files with specific options can also be done through a script. For example, refer to FBXExporterScript.cs script found in the MotiveBatchProcessor folder.
Tracking Data Type | CSV | C3D | FBX | BVH | TRC |
---|---|---|---|---|---|
A calibration definition of a selected take can be exported from the Export Camera Calibration under the File tab. Exported calibration (CAL) files contain camera positions and orientations in 3D space, and they can be imported in different sessions to quickly load the calibration as long as the is maintained.
Read more about calibration files under the page.
Assets can be exported into the Motive user profile (.MOTIVE) file if they need to be re-imported. The user profile is a text-readable file that contains various configuration settings in Motive, including the asset definitions.
C3D Export: Both the mocap data and the analog data will be exported into the same C3D file. Please note that all of the analog data within the exported C3D files will be logged at the same sampling frequency. If any of the devices are captured at different rates, Motive will automatically resample all of the analog devices to match the sampling rate of the fastest device. More on C3D files:
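To illustrate the resampling behavior described above, here is a minimal linear-resampling sketch (not Motive's internal code): a channel recorded at a slower rate is upsampled to the rate of the fastest device so every analog column shares one sampling frequency.

```python
def resample(samples, src_rate, dst_rate):
    """Linearly resample an analog channel from src_rate to dst_rate Hz."""
    n_out = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(n_out):
        t = i * src_rate / dst_rate          # fractional index into the source
        j = int(t)
        if j >= len(samples) - 1:
            out.append(samples[-1])          # hold the last sample at the tail
        else:
            frac = t - j
            out.append(samples[j] * (1 - frac) + samples[j + 1] * frac)
    return out
```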
When there is an MJPEG reference camera in a Take, its recorded video can be exported into an AVI file or into a sequence of JPEG files. The Export Video option is located under the File tab or you can also right-click on a TAK file from the and export from there. At the bottom of the export dialog, the frame rate of the exported AVI file can be set to a full-frame rate or down-sampled to half, quarter, 1/8, or 1/16 ratio framerate. You can also adjust the playback speed to export a video with a slower or faster playback speed. The captured reference videos can be exported into AVI files using either H.264 or MJPEG compression format. The H.264 format will allow faster export of the recorded videos and is recommended. Read more about recording reference videos on page.
When a recorded capture contains audio data, an audio file can be exported through the Export Audio option on the File menu or by right-clicking on a Take from the .
Skeletal marker labels for Skeleton assets can be exported as XML files (example shown below) from the . The XML files can be imported again to use the stored marker labels when creating new Skeletons.
For more information on Skeleton XML files, read through the page.
When it comes to tracking human movements, proper marker placement is especially important. Motive utilizes pre-programmed Skeleton Marker Sets, and each marker indicates an anatomical landmark when modeling the Skeleton. Thus, all of the markers must be placed at their appropriate locations. If any of the markers are misplaced, the Skeleton asset may not be created, and even if it is, bad marker placement may lead to problems. Taking extra care to place the markers at the intended locations is very important and can save time in post-processing of the data.
Open the Skeleton creation feature. Select the Marker Set you wish to use from the drop-down menu. The total number of required markers for each Skeleton is indicated in parentheses after each Marker Set name, and the corresponding marker locations are displayed on an avatar. Instruct the subject to strike a calibration pose (T-pose or A-pose), then carefully follow the figure and place retroreflective markers at the corresponding locations on the actor or subject.
All markers need to be placed at the respective anatomical locations of the selected Skeleton. Skeleton markers can be divided into two categories: markers placed along joint axes (joint markers) and markers placed on body segments (segment markers).
Segment markers are markers placed on Skeleton body segments, away from the joints. For best tracking results, each segment marker must be placed asymmetrically relative to the corresponding segment on the opposite side of the Skeleton (e.g., left thigh and right thigh), and segment markers must also be placed asymmetrically within each segment. This helps the Skeleton solver clearly distinguish the left and right sides of the corresponding Skeleton segments throughout the capture. This asymmetrical placement is also emphasized in the avatars shown in the Builder pane. Segment markers that can be slightly moved to different places on the same segment are highlighted on the 3D avatar in the Skeleton creation window.
See also:
When using the biomechanics Marker Sets, markers must be placed precisely and with extra care, because these placements directly relate to the coordinate system definition of each respective segment, affecting the resulting biomechanical analysis. The markers need to be placed on the skin for direct representation of the subject's movement; mocap suits are not suitable for biomechanics applications. While the basic marker placement must follow the avatar in the Builder pane, additional details on accurate placement can be found on the following page: .
From the Skeleton creation options on the , select a Skeleton Marker Set template from the Template drop-down menu. This will bring up a Skeleton avatar displaying where the markers need to be placed on the subject.
Refer to the avatar and place the markers on the subject accordingly. For accurate placement, ask the subject to stand in the calibration pose while placing the markers. It is important that these markers are placed at the right spots on the subject's body for the best Skeleton tracking, so extra attention is needed when placing them.
The magenta markers indicate markers that can be placed at a slightly different position within the same segment.
Double-check the marker counts and their placements. It may be easier to use the in Motive to do this. The system should be tracking the attached markers at this point.
In the Builder pane, make sure the numbers under the Markers Needed and Markers Detected sections match. If the Skeleton markers are not automatically detected, manually select the Skeleton markers.
Select a desired set of marker labels under the Labels section. You can use the Default labels defined by the Marker Set template, or assign custom labels by loading previously prepared files in the Labels section.
Ask the subject to stand in the selected calibration pose. Here, standing in a proper calibration posture is important because the pose of the created Skeleton will be calibrated from it. For more details, read the section.
If you are creating a Skeleton in the post-processing of captured data, you will have to the Take to see the Skeleton modeled and tracked in Motive.
By configuring , you can modify the display settings as well as Skeleton creation pose settings for Skeleton assets. For newly created Skeletons, default Skeleton creation properties are configured under the pane. Properties of existing, or recorded, Skeleton assets are configured under the while the respective Skeletons are selected in Motive.
The A-pose is another type of calibration pose that is used to create Skeletons. Set the Skeleton Create Pose setting to the A-pose you wish to calibrate with. This pose is especially beneficial for subjects who have restrictions in lifting the arm. Unlike the T-pose, arms are abducted at approximately 40 degrees from the midline of the body, creating an A-shape. There are three different types of A-pose: Palms down, palms forward, and elbows bent.
After creating a Skeleton from the , calibration markers need to be removed. First, detach the calibration markers from the subject. Then, in Motive, right-click on the Skeleton in the perspective view to access the context menu and click Skeleton → Remove Calibration Markers. Check the to make sure that the Skeleton no longer expects markers in the corresponding medial positions.
To recalibrate Skeletons, select all of the associated Skeleton markers from the perspective view and click Recalibrate From Markers which can be found in the Skeleton context menu from either the or the . When using this feature, select a Skeleton and the markers that are related to the corresponding asset.
Skeleton marker colors and marker sticks can be viewed in the pane. They provide color schemes for clearer identification of Skeleton segments and individual marker labels from the perspective viewport. To make them visible, enable the Marker Sticks and Marker Colors under the visual aids in the pane. A default color scheme is assigned when creating a Skeleton asset. To modify marker colors and labels, you can use the .
Constraints store information on marker labels, colors, and marker sticks which can be modified, exported and re-imported as needed. For more information on doing this, please refer to the page.
When adding, or removing, markers in the Edit mode, the Take needs to be again to re-label the Skeleton markers.
Access the Modify tab on the .
When you add extra markers to Skeletons, the markers will be labeled as Skeleton_CustomMarker#. You can use the to change the label as needed.
Enable selection of Marker Constraints from the visual aids option in .
Access the Modify tab on the .
There are two ways of obtaining Skeleton joint angles. Rough representations of joint angles can be obtained directly from Motive, but the most accurate representations of joint angles can be obtained by pipelining the tracking data into a third-party biomechanics analysis and visualization software (e.g. or ).
For biomechanics applications, joint angles must be computed accurately using the respective Skeleton model solve, which can be accomplished by using biomechanical analysis software. Export or stream tracking data from Motive and import it into the analysis software for further calculation. From the analysis, various biomechanics metrics, including joint angles, can be obtained.
Joint angles generated and exported from Motive are intended for basic visualization purposes only and should not be used for any type of biomechanical or clinical analysis. A rough representation of joint angles can be obtained by either exporting or streaming the Skeleton Rigid Body tracking data. When exporting the tracking data into CSV, set the export setting to Local to obtain bone segment position and orientation values in respect to its parental segment, roughly representing the joint angles by comparing two hierarchical coordinate systems. When streaming the data, set to true in the streaming settings to get relative joint angles.
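The "comparing two hierarchical coordinate systems" step can be sketched with plain quaternion math: the child segment's orientation expressed in its parent's frame is the conjugate of the parent orientation multiplied by the child orientation. The helper names below are illustrative, not part of any Motive API, and the result is only the rough joint representation described above.

```python
def q_conj(q):
    """Conjugate (inverse for unit quaternions), (w, x, y, z) order."""
    w, x, y, z = q
    return (w, -x, -y, -z)

def q_mul(a, b):
    """Hamilton product of two quaternions in (w, x, y, z) order."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw * bw - ax * bx - ay * by - az * bz,
            aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw)

def relative_rotation(q_parent, q_child):
    """Child segment orientation expressed in the parent's frame --
    a rough stand-in for the joint angle between the two segments."""
    return q_mul(q_conj(q_parent), q_child)
```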
Each Skeleton asset has its marker templates stored in an XML file. By exporting, customizing, and importing the constraint XML files, a Skeleton Marker Set can be modified. Specifically, customizing the XML files will allow you to modify Skeleton marker labels, marker colors, and marker sticks within a Skeleton asset. For detailed instructions on modifying Skeleton XML files, read through page.
To export a Skeleton XML file, right-click on a Skeleton asset under the Assets pane and use the feature to export corresponding Skeleton marker XML file.
You can import marker XML file under the Labels section of the when first creating a new Skeleton. To import a constraints XML file on an existing Skeleton, right-click on a Skeleton asset under the Assets pane and click Import Constraints.
This section provides detailed steps on how to create and use the measurement probe. Please make sure the camera volume has been successfully calibrated before creating the probe. System calibration is critical to the accuracy of marker tracking, and it directly affects the probe measurements.
Open the under and click Rigid Bodies.
Bring the probe out into the tracking volume and create a from the markers.
Once the probe is calibrated successfully, a probe asset will be displayed over the Rigid Body in Motive, and live x/y/z position data will be displayed under the .
Under the Tools tab, open the .
A virtual reference point is constructed at the location, and the coordinates of the point are displayed. The point's location can be exported as a .CSV file.
The location of the probe tip can also be streamed into another application in real-time by streaming the probe Rigid Body position. Once calibrated, the pivot point of the Rigid Body is positioned precisely at the tip of the probe. The location of the pivot point is represented by the corresponding Rigid Body x-y-z position, and it can be referenced to find out where the probe tip is located.
For more information, please visit site.
Autodesk has discontinued support for FBX ASCII import in MotionBuilder 2018 and above. For alternatives when working in MotionBuilder, please see the page.
Options | Description |
---|---|
Options | Descriptions |
---|---|
Rotate view | Right + Drag |
Pan view | Middle (wheel) click + drag |
Zoom in/out | Mouse Wheel |
Select in View | Left mouse click |
Toggle Selection in View | CTRL + left mouse click |
Mean Ray Error
The Mean Ray Error reports the mean error of how closely the tracked rays from each camera converge onto a 3D point with a given calibration. This represents the precision of the calculated 3D points during wanding. Acceptable values vary depending on the size of the volume and the camera count.
Mean Wand Error
The Mean Wand Error reports a mean error value of the detected wand length compared to the expected wand length throughout the wanding process.
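As an illustration of how a ray-convergence metric like this can be computed in principle (this is not Motive's internal code), the error contributed by one camera is the perpendicular distance from the reconstructed 3D point to that camera's ray; the Mean Ray Error would be the mean of these distances over all contributing rays.

```python
import math

def ray_point_distance(origin, direction, point):
    """Perpendicular distance from a reconstructed 3D point to a camera ray."""
    norm = math.sqrt(sum(c * c for c in direction))
    d = [c / norm for c in direction]                       # unit ray direction
    v = [p - o for p, o in zip(point, origin)]              # camera -> point
    t = sum(vi * di for vi, di in zip(v, d))                # projection onto ray
    closest = [oi + t * di for oi, di in zip(origin, d)]    # nearest ray point
    return math.sqrt(sum((pi - ci) ** 2 for pi, ci in zip(point, closest)))
```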
Use Zero Based Frame Index | C3D specification defines first frame as index 1. Some applications import C3D files with first frame starting at index 0. Setting this option to true will add a start frame parameter with value zero in the data header. |
Export Unlabeled Markers | Includes unlabeled marker data in the exported C3D file. When set to False, the file will contain data for only labeled markers. |
Export Finger Tip Markers | Includes virtual reconstructions at the finger tips. Available only with Skeletons that support finger tracking (e.g. Baseline + 11 Additional Markers + Fingers (54)) |
Use Timecode | Includes timecode. |
Rename Unlabeled As _000X | Unlabeled markers will have incrementing labels with numbers _000#. |
Marker Name Syntax | Choose whether the marker naming syntax uses ":" or "_" as the name separator. The name separator will be used to separate the asset name and the corresponding marker name in the exported data (e.g. AssetName:MarkerLabel or AssetName_MarkerLabel or MarkerLabel). |
Single Joint Torso | When this is set to true, there will be only one skeleton segment for the torso. When set to false, there will be extra joints on the torso, above the hip segment. |
Hands Downward | Sets the exported skeleton base pose to use hands facing downward. |
MotionBuilder Names | Sets the name of each skeletal segment according to the bone naming convention used in MotionBuilder. |
Skeleton Names | Set this to the name of the skeleton to be exported. |
Reconstructed 3D Marker Data | • | • | • | • |
6 Degrees of Freedom Rigid Body Data | • | • |
Skeleton Data | • | • | • |
CS-200:
Long arm: Positive z
Short arm: Positive x
Vertical offset: 19 mm
Marker size: 14 mm (diameter)
CS-400: Used for common mocap applications. Contains knobs for adjusting the balance as well as slots for aligning with a force plate.
Long arm: Positive z
Short arm: Positive x
Vertical offset: 45 mm
Marker size: 19 mm (diameter)
Legacy L-frame square: Legacy calibration square designed before changing to the Right-hand coordinate system.
Long arm: Positive z
Short arm: Negative x
Custom Calibration square: Position three markers in your volume in the shape of a typical calibration square (creating a ~90 degree angle with one arm longer than the other). Then select the markers to set the ground plane.
Long arm: Positive z
Short arm: Negative x
Frame Rate | Number of samples included per every second of exported data. |
Start Frame |
End Frame |
Scale | Apply scaling to the exported tracking data. |
Units | Sets the length units to use for exported data. |
Axis Convention | Sets the axis convention on exported data. This can be set to a custom convention, or to preset conventions for exporting to Motion Builder or Visual3D/Motion Monitor. |
X Axis Y Axis Z Axis | Allows customization of the axis convention in the exported file by determining which positional data to be included in the corresponding data set. |
Markers | Enabling this option includes X/Y/Z reconstructed 3D positions for each marker in exported CSV files. |
Unlabeled Markers | Enabling this option includes tracking data of all of the unlabeled markers in the exported CSV file along with the labeled markers. If you just want to view the labeled marker data, you can turn off this export setting. |
Rigid Bodies | When this option is set to true, the exported CSV file will contain 6 Degree of Freedom (6 DoF) data for each rigid body in the Take. 6 DoF data contains orientations (pitch, roll, and yaw) in the chosen rotation type as well as 3D positions (x, y, z) of the rigid body center. |
Rigid Body Markers | Enabling this option includes 3D position data for each Marker Constraint location (not the actual marker location) of rigid body assets. Compared to the raw marker positions included within the Markers columns, the Rigid Body Markers show the solved positions of the markers as affected by the rigid body tracking, but not affected by occlusions. |
Bones | When this option is set to true, exported CSV files will include 6 DoF data for each bone segment of skeletons in exported Takes. 6 DoF data contain orientations (pitch, roll, and yaw) in the chosen rotation type, and also 3D positions (x,y,z) for the center of the bone. |
Bone Markers | Enabling this option includes 3D position data for each Marker Constraint location (not the actual marker location) of bone segments in skeleton assets. Compared to the real marker positions included within the Markers columns, the Bone Markers show the solved positions of the markers as affected by the skeleton tracking, but not affected by occlusions. |
Header information | Includes detailed information about capture data as a header in exported CSV files. Types of information included in the header section is listed in the following section. |
Rotation Type | Rotation type determines whether Quaternions or Euler Angles are used as the orientation convention in exported CSV files. For Euler rotation, a right-handed coordinate system is used, and all orders (XYZ, XZY, YXZ, YZX, ZXY, ZYX) of elemental rotation are available. In the XYZ order, for example, pitch is the rotation about the X axis, yaw the rotation about the Y axis, and roll the rotation about the Z axis. |
Device Data | When set to True, separate CSV files for recorded device data will be exported. This includes force plate data and analog data from NI-DAQ devices. A CSV file will be exported for each device included in the Take. |
Use World Coordinates | This option decides whether exported data will be based on world (global) or local coordinate systems. |
1st row | General information about the Take and export settings. Included information are: format version of the CSV export, name of the TAK file, the captured frame rate, the export frame rate, capture start time, number of total frames, rotation type, length units, and coordinate space type. |
2nd row | Empty |
3rd row |
4th row | Includes marker or asset labels for each corresponding data set. |
5th row | Displays marker ID. |
6th and 7th row | Shows which data is included in the column: rotation or position and orientation on X/Y/Z. |
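Following the header layout above, a client script can read the Take metadata and the column labels from the first rows of an exported CSV. The sketch below assumes the first row alternates key and value cells, which matches Motive CSV exports; treat it as a starting point rather than a complete parser.

```python
import csv
import io

def read_export_header(text):
    """Pull Take metadata (row 1) and column labels (row 4) from the
    header of an exported Motive CSV, per the row layout in the table."""
    rows = list(csv.reader(io.StringIO(text)))
    info = rows[0]                            # row 1: key,value,key,value,...
    meta = dict(zip(info[::2], info[1::2]))
    labels = rows[3]                          # row 4: marker/asset labels
    return meta, labels
```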
Frame Rate | Number of samples included per every second of exported data. |
Start Frame |
End Frame |
Scale | Apply scaling to the exported tracking data. |
Units | Sets the length units to use for exported data. |
Axis Convention | Sets the axis convention on exported data. This can be set to a custom convention, or to preset conventions for exporting to Motion Builder or Visual3D/Motion Monitor. |
X Axis Y Axis Z Axis | Allows customization of the axis convention in the exported file by determining which positional data to be included in the corresponding data set. |
Frame Rate | Number of samples included per every second of exported data. |
Start Frame |
End Frame |
Scale | Apply scaling to the exported tracking data. |
Units | Sets the length units to use for exported data. |
Axis Convention | Sets the axis convention on exported data. This can be set to a custom convention, or preset conventions for exporting to Motion Builder or Visual3D/Motion Monitor. |
X Axis Y Axis Z Axis | Allows customization of the axis convention in the exported file by determining which positional data to be included in the corresponding data set. |
Frame Rate | Number of samples included per every second of exported data. |
Start Frame |
End Frame |
Scale | Apply scaling to the exported tracking data. |
Units | Set the unit in exported files. |
Use Timecode | Includes timecode. |
Export FBX Actors | Includes FBX Actors in the exported file. Actor is a type of asset used in animation applications (e.g. MotionBuilder) to display imported motions and connect to a character. In order to animate exported actors, associated markers will need to be exported as well. |
Optical Marker Name Space | Overrides the default name spaces for the optical markers. |
Marker Name Separator | Choose ":" or "_" for marker name separator. The name separator will be used to separate the asset name and the corresponding marker name when exporting the data (e.g. AssetName:MarkerLabel or AssetName_MarkerLabel). When exporting to Autodesk Motion Builder, use "_" as the separator. |
Markers | Export each marker coordinates. |
Unlabeled Markers | Includes unlabeled markers. |
Calculated Marker Positions | Export asset's constraint marker positions as the optical marker data. |
Interpolated Fingertips | Includes virtual reconstructions at the finger tips. Available only with Skeletons that support finger tracking. |
Marker Nulls | Exports locations of each marker. |
Export Skeleton Nulls |
Rigid Body Nulls |
Frame Rate | Number of samples included per every second of exported data. |
Start Frame |
End Frame |
Scale | Apply scaling to the exported tracking data. |
Units | Sets the unit for exported segment lengths. |
Use Timecode | Includes timecode. |
Export Skeletons |
Skeleton Names | Names of Skeletons that will be exported into the FBX binary file. |
Name Separator | Choose ":" or "_" for marker name separator. The name separator will be used to separate the asset name and the corresponding marker name when exporting the data (e.g. AssetName:MarkerLabel or AssetName_MarkerLabel). When exporting to Autodesk Motion Builder, use "_" as the separator. |
Rigid Body Nulls |
Rigid Body Names | Names of the Rigid Bodies to export into the FBX binary file as 6 DoF nulls. |
Marker Nulls | Exports locations of each marker. |
It is strongly recommended that you use separate audio capture software with timecode to capture and synchronize audio data. Audio capture in Motive is for reference only and is not intended to align perfectly with video or motion capture data.
Take scrubbing is not supported to align with audio recorded within Motive. If you would like the audio to be closely in reference to video and motion capture data, you must play the take from the beginning.
Recorded “Take” files with audio data will play back sound and may be exported into WAV audio files. This page details audio capture recommendations and instructions for recording and playing back audio in Motive.
Confirmed Devices
For users who need this feature, it's recommended to use one of the devices below, which have been confirmed to work:
AT2020 USB microphone
mixPre-3
In Motive, open the Audio tab of the Settings window, then enable the “Capture” property.
Select the audio input device that you would like to use.
Make noise to confirm the microphone is working with the level visual.
Make sure the “Device Format” of the recording device matches the “Device Format” that will be used for playback (speakers and headsets).
Start capturing data.
In Motive, open a Take that includes audio data.
Open the Audio tab of the Settings window, then enable the “Playback” property.
Select the audio output device that you will be using.
Make sure the configurations in Device Format closely matches the Take Format.
Play the Take.
In order to play back audio recordings in Motive, the audio format of the recorded data MUST closely match the audio format used by the output device. Specifically, the number of channels and the frequency (Hz) of the audio must match; otherwise, the recorded sound will not play back.
The recorded audio format is determined when a Take is first recorded. The recorded data format and the playback format may not always agree by default; in this case, the Windows audio settings will need to be adjusted to match the Take.
Audio capture within Motive does not natively synchronize to video or motion capture data and is intended for reference audio only. If you require synchronization, please use an external device and software with timecode. See below for suggestions for external audio capture.
A device's audio format can be configured under the Sound settings in the Control Panel. To do this, select the recording device and click Properties; the default format can then be changed under the Advanced tab, as shown in the image below.
Recorded audio files can be exported into WAV format. To export, right-click on a Take from the Data pane and select Export Audio option in the context menu.
There are a variety of programs and hardware that specialize in audio capture. A non-exhaustive list of examples:
Tentacle Sync TRACK E
Adobe Premiere
Avid Media Composer
Etc...
In order to capture audio using a different program, you will need to connect both the motion capture system (through the eSync) and the audio capture device to timecode data (and possibly genlock data). You can then use the timecode information to synchronize the two sources of data for your end product.
For more information on synchronizing external devices, read through the Synchronization page.
The following devices are internally tested and should work for most use cases for reference audio only:
AT2020 USB
MixPre-3 II Digital USB Preamp
Hotkeys can be viewed and customized from the Application Settings panel. The below chart lists only the commonly used hotkeys. There are also other hotkeys and unassigned hotkeys, which are not included in the chart below. For a complete list of hotkey assignments, please check the Application Settings in Motive.
The Data Streaming settings can be found by selecting the Settings cog or by selecting Edit > Settings in the Motive Toolbar/Command Bar.
Motive offers multiple options to stream tracking data to external applications in real-time. Streaming plugins are available for Autodesk MotionBuilder, The MotionMonitor, Visual3D, Unreal Engine 4, 3ds Max, Maya (VCS), VRPN, and trackd, and they can be downloaded from the OptiTrack website. For other streaming options, the NatNet SDK enables users to build custom clients to receive capture data. None of the listed streaming options require a separate license. Common motion capture applications rely on real-time tracking, and the OptiTrack system is designed to deliver data at an extremely low latency even when streaming to third-party pipelines. This page covers configuring Motive to broadcast frame data over a selected server network. Detailed instructions on specific streaming protocols are included in the PDF documentation that ships with the respective plugins or SDKs.
Read through the Application Settings page for explanations on each setting. NaturalPoint Data Streaming Forum: OptiTrack Data Streaming.
While streaming, the Labeled Markers setting must be enabled for Unlabeled markers to stream. If you do not wish to see Unlabeled markers, they can be toggled off so that only Labeled markers are streamed. Due to legacy behavior, if Labeled Markers is disabled, then both Labeled and Unlabeled markers are disabled, even if Unlabeled Markers is toggled on.
Select the network interface address for streaming data.
Select desired data types to stream under streaming options.
When streaming Skeletons, set the appropriate bone naming convention for client application.
Check Enable at the top under the NatNet settings.
Configure streaming settings and designate the corresponding IP address from client applications
Stream live or playback captures
It is important to select the network adapter (interface, IP Address) for streaming data. Most Motive Host PCs will have multiple network adapters - one for the camera network and one (or more) for the local area network (LAN). Motive will only stream over the selected adapter (interface). Select the desired interface using the Streaming tab in Motive's Settings. The interface can be either over a local area network (LAN) or on the same machine (localhost, local loopback). If both server (Motive) and client application are running on the same machine, set the network interface to the local loopback address (127.0.0.1). When streaming over a LAN, select the IP address of the network adapter connected to the LAN. This will be the same address the Client application will use to connect to Motive.
Firewall or anti-virus software can block network traffic, so it is important to make sure these applications are disabled or configured to allow access to both server (Motive) and Client applications.
Streamed Data Types
Before starting to broadcast data onto the selected network interface, define which data types to stream. Under streaming options, there are settings where you can include or exclude specific data types and syntax. Set only the necessary criteria to true. For most applications, the default settings will be appropriate.
See: Application Settings: Streaming
Unicast Subscription
New in Motive 3.0.
Starting from Motive version 3.0, unicast NatNet clients can subscribe only to the desired data types being streamed out. This minimizes the size of the data packets and reduces streaming latency, which is especially beneficial for wireless unicast clients, where streaming is more vulnerable to packet loss.
For more information on data subscription, please read the following page: NatNet: Unicast Data Subscription Commands
When streaming Skeleton data, the bone naming convention determines how each segment is annotated in the streamed data. The appropriate convention should be configured so that the client application properly recognizes the segments. For example, when streaming to Autodesk pipelines, the naming convention should be set to FBX.
Motive (1.7+) uses a right-handed Y-up coordinate system. However, coordinate systems used in client applications may not always agree with the convention used in Motive. In this case, the coordinate system in streamed data needs to be modified to a compatible convention. For client applications with a different ground plane definition, Up Axis can be changed under Advanced Network Settings. For compatibility with left-handed coordinate systems, the simplest method is to rotate the capture volume 180 degrees on the Y axis when defining the ground plane during Calibration.
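If rotating the ground plane is not an option, a pose can also be mirrored in software on the client side. The sketch below converts a right-handed Y-up pose to a left-handed Y-up convention by mirroring across the XY plane (negating Z); reflecting a rotation flips the sign of the quaternion components that lie in the mirror plane. Which axis to mirror depends on the client's convention, so treat this as illustrative rather than a universal recipe.

```python
def to_left_handed(pos, quat):
    """Mirror a right-handed Y-up pose across the XY plane (negate Z)
    for a left-handed Y-up client. quat is (w, x, y, z).

    For a reflection of the Z axis, the x and y quaternion components
    (the in-plane axes) change sign while w and z are preserved.
    """
    x, y, z = pos
    qw, qx, qy, qz = quat
    return (x, y, -z), (qw, -qx, -qy, qz)
```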
NatNet is a client/server networking protocol which allows sending and receiving data across a network in real-time. It utilizes UDP along with either Unicast or Multicast communication for integrating and streaming reconstructed 3D data, Rigid Body data, and Skeleton data from OptiTrack systems to client applications. Within the API, a class for communicating with OptiTrack server applications is included for building client protocols. Using the tools provided in the NatNet API, capture data can be used in various application platforms. Please refer to the NatNet User Guide for more information on using NatNet and its API references.
Rotation conventions
NatNet streams rotational data as quaternions. If you wish to present rotational data in the Euler convention (pitch-yaw-roll), the quaternion data needs to be converted into Euler angles. In the provided NatNet SDK samples, the SampleClient3D application converts quaternion rotations into Euler rotations to display in the application interface. The sample conversion algorithms are scripted in the NATUtils.cpp file; refer to NATUtils.cpp and SampleClient3D.cpp to see how to convert quaternions into Euler conventions.
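The conversion itself can be sketched as follows. This is a generic quaternion-to-Euler routine shown in Python for illustration, not a transcription of NATUtils.cpp; the roll (X), pitch (Y), yaw (Z) axis order used here is an assumption, and the NatNet samples support several axis orders, so adapt the sequence to your client's convention:

```python
import math

def quaternion_to_euler(qx, qy, qz, qw):
    """Convert a unit quaternion to Euler angles (radians) using a
    roll-about-X, pitch-about-Y, yaw-about-Z sequence."""
    # Roll: rotation about the X axis
    roll = math.atan2(2.0 * (qw * qx + qy * qz), 1.0 - 2.0 * (qx * qx + qy * qy))
    # Pitch: rotation about the Y axis (clamped to avoid domain errors at the poles)
    t = max(-1.0, min(1.0, 2.0 * (qw * qy - qz * qx)))
    pitch = math.asin(t)
    # Yaw: rotation about the Z axis
    yaw = math.atan2(2.0 * (qw * qz + qx * qy), 1.0 - 2.0 * (qy * qy + qz * qz))
    return roll, pitch, yaw

# The identity quaternion (0, 0, 0, 1) yields all angles zero;
# (0, sin(45 deg), 0, cos(45 deg)) yields a 90-degree pitch.
```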
If desired, recording in Motive can control, or be controlled by, other remote applications by sending or receiving either NatNet commands or XML broadcast messages through the UDP communication protocol. This enables client applications to trigger Motive, or vice versa. Using NatNet commands is recommended: they are not only more robust, but also offer additional control features.
Recording start and stop commands can also be transmitted via XML packets. When triggering via XML messages, the Remote Trigger setting under Advanced Network Settings must be set to true. In order for Motive, or clients, to receive the packets, the XML messages must be sent to the triggering UDP port, which is the defined Command Port (default: 1510) under the advanced network settings plus two, so it defaults to 1512. Lastly, the XML messages must exactly follow the appropriate syntax:
XML Triggering Port: Command Port (Advanced Network Settings) + 2. This defaults to 1512 (1510 + 2). Tip: Within the NatNet SDK sample package, there are simple applications (BroadcastSample.cpp (C++) and NatCap (C#)) that demonstrate a sample use of the XML remote trigger in Motive.
XML syntax for the start / stop trigger packet
Capture Start Packet
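A sketch of the start packet, modeled after the BroadcastSample in the NatNet SDK. Element and attribute names should be verified against your Motive version, and every VALUE below is a placeholder:

```xml
<?xml version="1.0" encoding="utf-8"?>
<CaptureStart>
    <TimeCode VALUE="00:00:00:00"/>
    <Name VALUE="RemoteTriggerTake_01"/>
    <SessionName VALUE="SessionName"/>
    <Notes VALUE=""/>
</CaptureStart>
```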
Capture Stop Packet
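A sketch of the stop packet, again modeled after the NatNet SDK's BroadcastSample; verify the element names against your Motive version, and note that the Name value is a placeholder:

```xml
<?xml version="1.0" encoding="utf-8"?>
<CaptureStop>
    <Name VALUE="RemoteTriggerTake_01"/>
</CaptureStop>
```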
Runs local or over network. The NatNet SDK includes multiple sample applications for C/C++, OpenGL, WinForms/.NET/C#, MATLAB, and Unity. It also includes a C/C++ sample showing how to decode Motive UDP packets directly without the use of client libraries (for cross-platform clients such as Linux). For more information regarding the NatNet SDK, visit our wiki page: NatNet SDK 4.0.
C/C++ or VB/C#/.NET or MATLAB
Markers: Y Rigid Bodies: Y Skeletons: Y
Runs local or over network. Allows streaming both recorded data and real-time capture data for markers, Rigid Bodies, and Skeletons.
Comes with Motion Builder Resources: OptiTrack Optical Device OptiTrack Skeleton Device OptiTrack Insight VCS
Markers: Y Rigid Bodies: Y Skeletons: Y
Streams capture data into Autodesk Maya for using the Virtual Camera System.
Requirements:
Requires Motive 1.0+
Requires a license valid through March 2, 2018 (check your status)
Works with Maya 2011 (x86 and x64), 2014, 2015, 2016, 2017 and 2018
Markers: Y Rigid Bodies: Y Skeletons: Y
With a Visual3D license, you can download the Visual3D Server application, which is used to connect an OptiTrack server to the Visual3D application. Using the plugin, Visual3D receives streamed marker data to solve precise Skeleton models for biomechanics applications.
Markers: Y Rigid Bodies: N Skeletons: N C-Motion wiki: Visual3DServer Plugin
Runs local or over network. Supports Unreal Engine versions up to 5. This plugin allows streaming of Rigid Bodies, markers, Skeletons, and integration of HMD tracking within Unreal Engine projects. For more details, read through the OptiTrack Unreal Engine Plugin documentation page.
Markers: Y Rigid Bodies: Y Skeletons: Y
Runs local or over network. This plugin allows streaming of tracking data and integration of HMD tracking within Unity projects. For more details, read through the OptiTrack Unity Plugin documentation page.
Markers: Y Rigid Bodies: Y Skeletons: Y
Runs Motive headlessly. Provides the best Motive command and control, as well as access to camera imagery and other data elements not available in the other streams.
C/C++
Markers: Y Rigid Bodies: Y Skeletons: N
Within Motive
Runs local or over network.
Includes source code (C++) of a sample implementation for VRPN streaming. The Virtual-Reality Peripheral Network (VRPN) is an open source project containing a library and a set of servers that are designed for implementing a network interface between application programs and tracking devices used in a virtual-reality system.
Motive 3.0 uses VRPN version 7.33.1.
For more information: VRPN Github
Within Motive
Captured tracking data can be exported into a Track Row Column (TRC) file, a format used in various mocap applications. Exported TRC files can also be opened in spreadsheet software (e.g., Excel). These files contain raw output data from the capture, including the positional data of each labeled and unlabeled marker from a selected Take. Expected marker locations and segment orientation data are not included in the exported files. The header contains basic information such as file name, frame rate, time, number of frames, and the corresponding marker labels; the corresponding XYZ data occupies the remaining rows of the file.
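As a sketch of that layout, a minimal tab-delimited TRC file can be assembled as below. The header field order follows common TRC conventions; Motive's exporter may emit additional or slightly different fields:

```python
def write_trc(path, marker_names, frames, frame_rate=120.0):
    """Write a minimal tab-delimited TRC file.

    marker_names: list of marker labels.
    frames: list of frames; each frame is a list of (x, y, z) tuples,
            one per marker, in the same order as marker_names.
    """
    num_frames, num_markers = len(frames), len(marker_names)
    lines = [
        "PathFileType\t4\t(X/Y/Z)\t" + path,
        "DataRate\tCameraRate\tNumFrames\tNumMarkers\tUnits\t"
        "OrigDataRate\tOrigDataStartFrame\tOrigNumFrames",
        f"{frame_rate}\t{frame_rate}\t{num_frames}\t{num_markers}\tm\t"
        f"{frame_rate}\t1\t{num_frames}",
        # One label per marker, padded so each spans its X/Y/Z columns
        "Frame#\tTime\t" + "\t\t\t".join(marker_names),
        "\t\t" + "\t".join(f"X{i}\tY{i}\tZ{i}" for i in range(1, num_markers + 1)),
    ]
    for i, frame in enumerate(frames):
        coords = "\t".join(f"{x:.6f}\t{y:.6f}\t{z:.6f}" for x, y, z in frame)
        lines.append(f"{i + 1}\t{i / frame_rate:.6f}\t{coords}")
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")
```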
This page covers different video modes that are available on the OptiTrack cameras. Depending on the video mode that a camera is configured to, captured frames are processed differently, and only the configured video mode will be recorded and saved in Take files.
Video types, or image-processing modes, available in OptiTrack Cameras
There are different video types, or image-processing modes, which can be used when capturing with OptiTrack cameras. Depending on the camera model, the available modes vary slightly. Each video mode processes captured frames differently at both the camera hardware and software level. Furthermore, the precision of the capture and the required amount of CPU resources will vary depending on the configured video type.
The video types are categorized as either tracking modes (Object mode and Precision mode) or reference modes (MJPEG and raw grayscale). Only cameras in a tracking mode contribute to the reconstruction of 3D data.
To switch between video types, simply right-click on one of the cameras from the 2D camera preview pane and select the desired image processing mode under the video types.
Motive records frames of only the configured video types. Video types of the cameras cannot be switched for recorded Takes in post-processing of captured data.
(Tracking Mode) Object mode performs on-camera detection of the centroid location, size, and roundness of the markers, and then the respective 2D object metrics are sent to the host PC. In general, this mode is recommended for obtaining 3D data. Compared to the other processing modes, Object mode has the smallest CPU footprint; as a result, the lowest processing latency can be achieved while maintaining high accuracy. However, be aware that the 2D reflections are truncated into object metrics in this mode. Object mode is beneficial for Prime Series and Flex 13 cameras when the lowest latency is necessary or when CPU performance is taxed by Precision Grayscale mode (e.g., high camera counts on a less powerful CPU).
Supported Camera Models: Prime/PrimeX series, Flex 13, and S250e camera models.
(Tracking Mode) Precision mode performs on-camera detection of marker reflections and their centroids. These centroid regions of interest are sent to the PC for additional processing to determine the precise centroid location. This provides high-quality centroid locations but is computationally expensive, and it is recommended only for low-to-moderate camera count systems when Object mode is unavailable.
Supported Camera Models: Flex series, Tracking Bars, S250e, Slim13e, and Prime 13 series camera models.
(Reference Mode) MJPEG-compressed grayscale mode captures grayscale frames, compressed on-camera for scalable reference video capabilities. Grayscale images are used only for reference purposes, and processed frames will not contribute to the reconstruction of 3D data. MJPEG mode can run at full frame rate and be synchronized with the tracking cameras.
Supported Camera Models: All camera models
(Reference Mode) Processes full-resolution, uncompressed grayscale images. Grayscale mode is designed to be used only for reference purposes, and processed frames will not contribute to the reconstruction of 3D data. Because of the high bandwidth required to send raw grayscale frames, this mode is not fully synchronized with the tracking cameras, and the camera will run at a lower frame rate. Also, raw grayscale videos cannot be exported from a recording. Use this video mode only for aiming and for monitoring camera views when diagnosing tracking problems.
Supported Camera Models: All camera models.
Open the Devices pane and the Properties pane and select one or more of the listed cameras. Once the selection is made, the respective camera properties will be shown in the Properties pane. The current video type is shown in the Video Mode section, and you can change it using the drop-down menu.
From Perspective View
In the perspective view, right-click on a camera from the viewport and set the camera to the desired video mode.
From Cameras View
In the cameras view, right-click on a camera view and change the video type for the selected camera.
Compared to the object data produced by non-reference cameras in the system, MJPEG videos are larger in data size, and recording reference video consumes more network bandwidth. A high amount of data traffic can increase system latency or reduce the system frame rate. For this reason, we recommend setting no more than one or two cameras to a reference mode. Reference views can be observed from the Camera Preview pane, or by selecting Video from the Viewport dropdown and selecting the camera that is in MJPEG mode.
If grayscale mode is selected during a recording instead of MJPEG, no reference video will be recorded and the data from that camera will display a black screen. Full grayscale is strictly for aiming and focusing cameras.
Note:
Processing latency can be monitored from the status bar located at the bottom.
MJPEG videos are used only for reference purposes, and processed frames will not contribute to the reconstruction of 3D data.
The video captured by reference cameras can be monitored from the viewport. To view the reference video, select the camera that you wish to monitor, and use the Num 3 hotkey to switch to the reference view. If the camera was calibrated and capturing reference videos, 3D assets will be overlaid on top of the reference image.
The Motive Batch Processor is a separate stand-alone Windows application, built on the new NMotive scripting and programming API, that can be utilized to process a set of Motive Take files via IronPython or C# scripts. While the Batch Processor includes some example script files, it is primarily designed to utilize user-authored scripts.
Initial functionality includes scripting access to file I/O, reconstructions, high-level Take processing using many of Motive's existing editing tools, and data export. Upcoming versions will provide access to track, channel, and frame-level information, for creating cleanup and labeling tools based on individual marker reconstruction data.
Motive Batch Processor scripts make use of the NMotive .NET class library, and you can also utilize the NMotive classes to write .NET programs and IronPython scripts that run outside of this application. The NMotive assembly is installed in the Global Assembly Cache and is also located in the assemblies sub-directory of the Motive install directory. For example, the default location for the assembly included in the 64-bit Motive installer is:
C:\Program Files\OptiTrack\Motive\assemblies\x64
The full source code for the Motive Batch Processor is also installed with Motive, at:
C:\Program Files\OptiTrack\Motive\MotiveBatchProcessor\src
You are welcome to use the source code as a starting point to build your own applications on the NMotive framework.
Requirements
A Batch Processor script using the NMotive API (C# or IronPython).
Take files that will be processed.
Steps
First, select and load a Batch Processor script. Sample scripts for various pipelines can be found in the [Motive Directory]\MotiveBatchProcessor\ExampleScripts\ folder.
Load the captured Takes (TAK) that will be processed using the imported scripts.
Click Process Takes to batch process the Take files.
Reconstruction Pipeline
A class reference in Microsoft compiled HTML (.chm) format can be found in the Help sub-directory of the Motive install directory. The default location for the help file (in the 64-bit Motive installer) is:
C:\Program Files\OptiTrack\Motive\Help\NMotiveAPI.chm
The Motive Batch Processor can run C# and IronPython scripts. Below is an overview of the C# script format, as well as an example script.
A valid Batch Processor C# script file must contain a single class implementing the ITakeProcessingScript interface. This interface defines a single function:
Result ProcessTake( Take t, ProgressIndicator progress ). Result, Take, and ProgressIndicator are all classes defined in the NMotive namespace. The Take object t is an instance of the NMotive Take class; it is the take being processed. The progress object is an instance of the NMotive ProgressIndicator class and allows the script to update the Batch Processor UI with progress and messages. The general format of a Batch Processor C# script is therefore a single class that implements this interface and does its work inside ProcessTake.
In the [Motive Directory]\MotiveBatchProcessor\ExampleScripts\ folder, there are multiple C# (.cs) sample scripts that demonstrate the use of NMotive for various processing pipelines, including tracking data export and other post-processing tools. Note that your C# script file must have a '.cs' extension.
Included sample script pipelines:
ExporterScript - BVH, C3D, CSV, FBXAscii, FBXBinary, TRC
TakeManipulation - AddMarker, DisableAssets, GapFill, MarkerFilterScript, ReconstructAutoLabel, RemoveUnlabeledMarkers, RenameAsset
Your IronPython script file must import the clr module and reference the NMotive assembly. In addition, it must contain the following function, where t is an NMotive Take and progress is an NMotive ProgressIndicator:

def ProcessTake(t, progress):
The following illustrates a typical IronPython script format.
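A minimal sketch of such a script is shown below. This assumes the NMotive API surface demonstrated in the example scripts; the Result constructor arguments and the progress method name are assumptions to be checked against the shipped samples and the NMotiveAPI.chm reference:

```python
# Hypothetical IronPython Batch Processor script sketch.
import clr
clr.AddReference("NMotive")
from NMotive import *

def ProcessTake(t, progress):
    # Report status back to the Batch Processor UI (method name assumed).
    progress.SetMessage("Processing " + t.Name)
    # ... run NMotive processing pipelines on the take here ...
    t.Save()
    # Return a Result indicating success and an optional message (signature assumed).
    return Result(True, "")
```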
In the [Motive Directory]\MotiveBatchProcessor\ExampleScripts\ folder, there are sample scripts that demonstrate the use of NMotive for various processing pipelines, including tracking data export and other post-processing tools. Note that your IronPython script file must have a '.py' extension.
Reconstruction is the process of deriving 3D points from the 2D coordinates obtained from captured camera images. When multiple synchronized images are captured, the 2D centroid locations of detected marker reflections are triangulated on each captured frame and processed through the solver pipeline. This process involves trajectorization of detected 3D markers within the calibrated capture volume and the booting process for tracking defined assets.
To cycle between camera video types in Motive, click the camera video type icon under Mode in the Devices pane.
We do not recommend lowering the THR value (default: 200) for the cameras, since lowering the THR setting can introduce false reconstructions and noise into the data.
When a frame is captured by a camera, the 2D camera filter is applied. This filter judges the sizes and shapes of the detected reflections or IR illuminations and determines which ones can be accepted as markers. Please note that the camera filter settings can be configured in Live mode only, because this filter is applied at the hardware level when the 2D frames are first captured. Thus, you will not be able to modify these settings on a recorded Take, as the 2D data has already been filtered and saved; however, when needed, you can increase the threshold on the filtered 2D data and perform post-processing reconstruction to recalculate the 3D data from the 2D data.
Min/Max Thresholded Pixels
The Min/Max Thresholded Pixels settings determine lower and upper boundaries of the size filter. Only reflections with pixel counts within the boundaries will be considered as marker reflections, and any other reflections below or above the defined boundary will be filtered out. Thus, it is important to assign appropriate values to the minimum and maximum thresholded pixel settings.
For example, in a close-up capture application, marker reflections appear bigger in the camera's view. In this case, you may want to raise the maximum threshold value so that reflections with more thresholded pixels can still be considered as marker reflections. For common applications, however, the default range should work fine.
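The effect of the size filter can be illustrated with a small sketch. This is not Motive's implementation, and the boundary values below are illustrative, not Motive's defaults:

```python
def size_filter(blobs, min_pixels=3, max_pixels=200):
    """Keep only reflections whose thresholded-pixel count falls inside
    the [min_pixels, max_pixels] boundary, mimicking the Min/Max
    Thresholded Pixels size filter. `blobs` maps a blob ID to its
    thresholded pixel count."""
    return {b: n for b, n in blobs.items() if min_pixels <= n <= max_pixels}

blobs = {"sensor-noise": 1, "marker-a": 40, "marker-b": 85, "window-glare": 900}
print(size_filter(blobs))  # keeps only marker-a and marker-b
```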
Circularity
Object mode vs. Precision Mode
Tracked Ray (Green)
Tracked rays are marker rays that represent detected 2D centroids contributing to 3D reconstructions within the volume. Tracked rays are visible only when reconstructed markers are selected in the viewport.
Untracked Ray (Red)
An untracked ray is a marker ray that fails to contribute to the reconstruction of a 3D point. Untracked rays occur when the reconstruction requirements, usually the ray count or the maximum residual, are not met.
Minimum Rays to Start / Minimum Rays to Continue
This setting sets the minimum number of tracked marker rays required for a 3D point to be reconstructed; in other words, the required number of calibrated cameras that must see the marker. Increasing the minimum ray count may prevent extraneous reconstructions, and decreasing it may help when occlusions leave too few cameras seeing a marker. In general, modifying this setting is recommended only for high camera count setups.
More Settings
Motive performs real-time reconstruction of 3D coordinates directly from either captured or recorded 2D data. When Motive is live-processing the data, you can examine the marker rays from the viewport, inspect the Live-Pipeline settings, and optimize the 3D data acquisition.
There are two modes where Motive is reconstructing 3D data in real-time:
Live mode (Live 2D data capture)
2D mode (Recorded 2D data)
The 2D Mode is used to monitor 2D data in the post-processing of a captured Take. When a capture is recorded in Motive, both 2D camera data and reconstructed 3D data are saved into a Take file, and by default, the 3D data gets loaded first when a recorded Take file is opened.
Switching to 2D Mode
Applying changes to 3D data
Once the reconstruction/solver settings have been adjusted and optimized on recorded data, the post-processing reconstruction pipeline needs to be run on the Take in order to reconstruct a new set of 3D data. Note that the existing 3D data will be overwritten and all post-processing edits on it will be discarded.
The post-processing reconstruction pipeline allows you to convert 2D data from a recorded Take into 3D data. In other words, you can obtain a fresh set of 3D data from recorded 2D camera frames by performing reconstruction on a Take. Also, if any of the reconstruction parameters have been optimized post-capture, the changes will be reflected in the newly obtained 3D data.
Reconstructing recorded Takes again either by Reconstruct or Reconstruct and Auto-label pipeline will completely overwrite existing 3D data, and any post-processing edits on trajectories and marker labels will be discarded.
Also, for Takes involving Skeleton assets, if the Skeletons are never in well-trackable poses throughout the captured Take, the recorded Skeleton marker labels, which were intact during the live capture, may be discarded, and the reconstructed markers may not be auto-labeled again. This is another reason to start a capture with a calibration pose (e.g., a T-pose).
Start frame of the exported data. You can set it either to the first recorded frame of the exported Take or to the start of the working (scope) range.
End frame of the exported data. You can set it either to the last recorded frame of the exported Take or to the end of the working (scope) range.
Global: Defines the position and orientation with respect to the global coordinate system of the calibrated capture volume. The global coordinate system originates at the ground plane origin that was set with a calibration square during the calibration process.
Local: Defines the bone segment position and orientation with respect to the coordinate system of the parent segment. Note that the hip of the skeleton is always the top-most parent of the segment hierarchy. Local coordinate axes can be toggled visible in the viewport. Bone segment rotation values in the local coordinate space can be used to roughly represent joint angles; however, for precise analysis, joint angles should be computed through biomechanical analysis software using the exported capture data (C3D).
Displays which data type is listed in each corresponding column. Data types include raw marker, Rigid Body, Rigid Body marker, bone, bone marker, or unlabeled marker.
Start frame of the exported data. You can set it either to the first recorded frame of the exported Take or to the start of the working (scope) range.
End frame of the exported data. You can set it either to the last recorded frame of the exported Take or to the end of the working (scope) range.
Start frame of the exported data. You can set it either to the first recorded frame of the exported Take or to the start of the working (scope) range.
End frame of the exported data. You can set it either to the last recorded frame of the exported Take or to the end of the working (scope) range.
Start frame of the exported data. You can set it either to the first recorded frame of the exported Take or to the start of the working (scope) range.
End frame of the exported data. You can set it either to the last recorded frame of the exported Take or to the end of the working (scope) range.
Can only be exported when the corresponding solved data is recorded for the exported Skeleton assets. Exports 6 degree-of-freedom data for every bone segment in the selected Skeletons.
Can only be exported when the corresponding solved data is recorded for the exported Rigid Body assets. Exports 6 degree-of-freedom data for the selected Rigid Bodies. Orientation axes are displayed at the geometrical center of each Rigid Body.
Start frame of the exported data. You can set it either to the first recorded frame of the exported Take or to the start of the working (scope) range.
End frame of the exported data. You can set it either to the last recorded frame of the exported Take or to the end of the working (scope) range.
Export Skeleton nulls. Please note that solved data must be recorded for Skeleton bone tracking data to be exported. This exports 6 degree-of-freedom data for every bone segment in the selected Skeletons.
Can only be exported when the corresponding solved data is recorded for the exported Rigid Body assets. Exports 6 degree-of-freedom data for the selected Rigid Bodies. Orientation axes are displayed at the geometrical center of each Rigid Body.
To quickly access the streaming settings, click on the streaming icon () from the control deck. This will open the streaming tab in the application settings panel.
You can check and/or switch the video type of a selected camera from either the camera properties or the viewports. You can also toggle cameras between tracking mode and reference mode in the Devices pane by clicking the Mode button. If you want to use all of the cameras for tracking, make sure all of the cameras are in a tracking mode.
Cameras can also be set to record reference videos during capture. When using MJPEG mode, these videos are synchronized with the other captured frames, and they are used to observe what goes on during the recorded capture. To record reference video, switch the camera into MJPEG mode by toggling the camera mode in the Devices pane.
Launch the Motive Batch Processor. It can be launched from the start menu, from the Motive install directory, or from within Motive.
When running the reconstruction pipeline in the Batch Processor, the reconstruction settings must be loaded using the ImportMotiveProfile method. From Motive, export the user profile and make sure it includes the reconstruction settings. Then, import this user profile file in the Batch Processor script before running the reconstruction, or trajectorizer, pipeline so that the proper settings are used for reconstructing the 3D data. For more information, refer to the sample scripts located in the TakeManipulation folder.
IronPython is an implementation of the Python programming language that can use both .NET libraries and Python libraries. The Batch Processor can execute valid IronPython scripts in addition to C# scripts.
This page explains some of the settings that affect how 3D tracking data is obtained. Most of the related settings can be found under the Live Pipeline tab in the application settings. A basic understanding of this process will allow you to fully utilize Motive for analyzing and optimizing captured 3D tracking data. That said, we do not recommend changing these settings, as the defaults should work well for most tracking applications.
For real-time tracking in Live mode, the settings for this pipeline can be configured from the Live Pipeline tab in the application settings. For post-processing recorded files in Edit mode, the solver settings can be accessed under the corresponding Take properties. Note that optimal configurations may vary depending on the capture application and environmental conditions, but for most common applications, the default settings should work well.
In this page, we will focus on the camera settings and the solver settings, which are the key settings that directly affect the reconstruction outcome.
Camera settings can be configured under the camera properties. In general, the overall quality of 3D reconstructions is affected by the quality of the captured camera images. For this reason, the camera lenses must be focused on the tracking volume, and the settings should be configured so that the markers are clearly visible in each camera view. Thus, camera settings such as exposure and IR intensity must always be checked and optimized for each setup. The following sections highlight additional settings that are directly related to 3D reconstruction.
Tracking mode vs. Reference mode: Only cameras configured in a tracking mode (Object or Precision) will contribute to reconstructions. Cameras in a reference mode (MJPEG or Grayscale) will NOT contribute to reconstructions. See the camera video types section for more information.
The THR (threshold) setting is located in the camera properties in Motive. When cameras are set to a tracking mode, only pixels with brightness values greater than the configured threshold are captured and processed. Pixels brighter than the threshold are referred to as thresholded pixels, and all other pixels are filtered out. Only clusters of thresholded pixels are then passed through the 2D Object Filter to potentially be considered as marker reflections.
To inspect the brightness values of pixels, set Pixel Inspection to true under the View tab in the application settings.
The Live Pipeline settings under the application settings control the tracking quality in Motive. When a camera system captures multiple synchronized 2D frames, the images are processed through two main stages before being reconstructed into 3D tracking data: the first filter is at the camera hardware level, and the second is at the software level. Both are important in deciding which 2D reflections get identified as marker reflections and reconstructed into 3D data. Adjust these settings to optimize 3D data acquisition in both live reconstruction and post-processing reconstruction of captured data.
Enable Marker Size under the visual aids in the viewport to inspect which reflections are accepted, or omitted, by the size filter.
In addition to the size filter, the 2D Object Filter also identifies marker reflections based on their shape, specifically their roundness. It assumes that all marker reflections have circular shapes and filters out any non-circular reflections detected by each camera. The allowable circularity value is defined under the circularity setting in the camera properties. The valid range is between 0 and 1, with 0 being completely flat and 1 being perfectly round. Only reflections with circularity values greater than the defined threshold will be considered as marker reflections.
Enable Marker Circularity under the visual aids in the viewport to inspect which reflections are accepted, or omitted, by the circularity filter.
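A standard roundness metric illustrates the idea behind this filter. The sketch below uses the common 4*pi*area/perimeter^2 shape measure for illustration; Motive's internal circularity computation is not published, and the threshold value shown is illustrative:

```python
import math

def circularity(area, perimeter):
    """Approximate blob roundness as 4*pi*area / perimeter^2:
    1.0 for a perfect circle, approaching 0 for elongated shapes."""
    return 4.0 * math.pi * area / (perimeter ** 2)

def circularity_filter(blobs, threshold=0.6):
    """Keep blobs whose roundness exceeds the threshold (valid range 0-1)."""
    return [b for b in blobs if circularity(b["area"], b["perimeter"]) > threshold]

# A circle of radius 10 (area ~314.16, perimeter ~62.83) scores 1.0;
# a thin 50x2 rectangle (area 100, perimeter 104) scores ~0.12 and is rejected.
```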
Object mode and Precision mode deliver slightly different data to the host PC. In Object mode, cameras detect the 2D centroid location, size, and roundness of markers and deliver these metrics to the host PC. In Precision mode, cameras send the pixel data that would have been used by Object mode to the host PC, where additional processing determines the centroid location, size, and roundness of the reflections. Read more in the camera video types section.
After the 2D camera filter has been applied, each 2D centroid captured by a calibrated camera forms a marker ray: a 3D vector ray connecting the detected centroid to a 3D coordinate in the capture volume. When the minimum required number of rays (defined by the Minimum Rays to Start setting) converge and intersect within the allowable maximum offset distance (defined by the solver settings), trajectorization of a 3D marker occurs. Trajectorization is the process of using 2D data to calculate the respective 3D marker trajectories in Motive.
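The convergence test can be sketched with a generic two-ray midpoint triangulation. This is an illustration of the geometry only, not Motive's solver: the midpoint of the segment of closest approach is the candidate 3D marker, and the residual is what gets compared against the allowable maximum offset:

```python
import math

def triangulate_two_rays(o1, d1, o2, d2):
    """Return (midpoint, residual) for two camera rays given origins o
    and unit direction vectors d. The residual is half the gap between
    the rays at their closest approach."""
    # Solve for the closest points p1 = o1 + t1*d1 and p2 = o2 + t2*d2.
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    w = [a - b for a, b in zip(o1, o2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b            # ~0 when the rays are parallel
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    p1 = [o + t1 * k for o, k in zip(o1, d1)]
    p2 = [o + t2 * k for o, k in zip(o2, d2)]
    mid = [(u + v) / 2 for u, v in zip(p1, p2)]
    return mid, math.dist(p1, p2) / 2
```

Two rays that intersect exactly yield a zero residual; noisy centroids yield near-miss rays, and a residual above the configured maximum offset rejects the candidate point.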
Monitoring marker rays is an efficient way of inspecting reconstruction outcomes. The rays show up by default; if not, they can be enabled from the visual aids options in the viewport. There are two types of marker rays in Motive: tracked rays and untracked rays. By inspecting these marker rays, you can easily find out which cameras are contributing to the reconstruction of a selected marker.
Motive processes marker rays from the cameras to reconstruct the respective markers, and the solver settings determine how the 2D data gets trajectorized and solved into 3D data for tracking Rigid Bodies and/or Skeletons. The solver not only tracks from the marker rays but also utilizes pre-defined asset definitions to provide high-quality tracking. The default solver settings work for most tracking applications, and users should not need to modify them. That said, some of the basic settings that can be modified are summarized below.
The Live Pipeline settings do not have to be modified for most tracking applications. There are other reconstruction settings that can be adjusted to improve the acquisition of 3D data. For a detailed description of each setting, read through the settings reference or refer to the corresponding tooltips.
In Live mode, Motive processes the captured 2D frames to obtain 3D tracking data in real time, and you can inspect and monitor the marker rays from the viewport. Any changes to the Live Pipeline (Solver/Camera) settings under the application settings will be reflected immediately in Live mode.
Recorded 3D data contains only the 3D coordinates that were live-reconstructed at the moment of capture; in other words, once the recording has been made, this data is completely independent of the 2D data. You can still, however, view and use the recorded 2D data to optimize the solver parameters and reconstruct a fresh set of 3D data from it. To do so, you need to switch into 2D Mode.
In 2D Mode, Motive reconstructs in real time from the recorded 2D data, using the reconstruction/solver settings that were configured at the time of recording; these settings are saved under the properties of the corresponding TAK file. Please note that the reconstruction/solver settings from the Take properties are applied for post-processing, instead of the settings from the application settings panel. When editing a TAK file in 2D Mode, any changes to the reconstruction/solver settings under the Take properties will be reflected in how the 3D reconstructions are solved, in real time.
In the viewport, click to access the menu options and check the 2D Mode option.
To perform post-processing reconstruction, select the desired Takes, right-click the selection, and run either the Reconstruct pipeline or the Reconstruct and Auto-label pipeline from the context menu.
Camera Filter Settings: In Edit mode, 2D camera filters can still be modified from the tracking group properties. Modified filter settings change which markers in the recorded 2D data get processed through the Live Pipeline engine.
Solver/Reconstruction Settings: When you perform post-processing reconstruction on recorded Take(s), a new set of 3D data is reconstructed from the filtered 2D camera data. In this step, the solver settings defined under the corresponding Take properties are used. Note that the reconstruction properties in the application settings apply to live capture only.
Reconstruct and Auto-label additionally applies the auto-labeling pipeline to the obtained 3D data, labeling any markers that associate with existing asset (Rigid Body or Skeleton) definitions. The auto-labeling pipeline is explained further on its dedicated page.
Post-processing reconstruction can be performed either on the entire frame range of a Take or only within a desired frame range by selecting that range first. When nothing is selected, reconstruction is applied to all frames. The entire frame ranges of multiple Takes can also be selected and processed together by selecting the desired Takes.
CS-100: Used to define a ground plane in small, precise motion capture volumes.
Long arm: Positive z
Short arm: Positive x
Vertical offset: 11.5 mm
Marker size: 9.5 mm (diameter)
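The dimensions above determine how the ground plane is derived from the three marker centers. The following is a rough sketch under stated assumptions (a hypothetical illustration, not Motive's calibration routine): the long and short arms set the +Z and +X axes, and the origin is dropped by the 11.5 mm vertical offset along the up direction.

```python
def normalize(v):
    n = sum(c * c for c in v) ** 0.5
    return tuple(c / n for c in v)

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def ground_plane(corner, long_arm, short_arm, vertical_offset=0.0115):
    """Derive a ground-plane origin and axes from three measured marker centers (meters).

    corner: marker at the square's corner; long_arm: marker on the long arm (+Z);
    short_arm: marker on the short arm (+X). Markers sit vertical_offset above the floor.
    """
    z_axis = normalize(tuple(a - b for a, b in zip(long_arm, corner)))
    x_axis = normalize(tuple(a - b for a, b in zip(short_arm, corner)))
    y_axis = cross(z_axis, x_axis)  # up, completing a right-handed Y-up frame
    origin = tuple(c - vertical_offset * y for c, y in zip(corner, y_axis))
    return origin, x_axis, y_axis, z_axis

# Marker centers 11.5 mm above an ideal floor:
origin, x, y, z = ground_plane((0, 0.0115, 0), (0, 0.0115, 0.2), (0.1, 0.0115, 0))
print(origin)  # approximately (0.0, 0.0, 0.0)
```

The arm lengths and exact marker placement are not given here, so the example simply parameterizes the measured positions.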
File

| Action | Shortcut |
| --- | --- |
| Open File (TTP, CAL, TAK, TRA, SKL) | Ctrl + O |
| Save Current Take | Ctrl + S |
| Save Current Take As | Ctrl + Shift + S |
| Export Tracking Data from current (or selected) TAKs | Ctrl + Shift + Alt + S |
Basic

| Action | Shortcut |
| --- | --- |
| Toggle Between Live/Edit Mode | Shift + ~ |
| Record Start / Playback Start | Space Bar |
| Select All | Ctrl + A |
| Undo | Ctrl + Z |
| Redo | Ctrl + Y |
| Cut | Ctrl + X |
| Paste | Ctrl + V |
Layout

| Action | Shortcut |
| --- | --- |
| Calibrate Layout | Ctrl + 1 |
| Create Layout | Ctrl + 2 |
| Capture Layout | Ctrl + 3 |
| Edit Layout | Ctrl + 4 |
| Custom Layout [1...] | Ctrl + [5...9], Shift + [1...9] |
Perspective View Pane (3D)

| Action | Shortcut |
| --- | --- |
| Switch selected viewport to 3D perspective view | 1 |
| Switch selected viewport to 2D camera view | 2 |
| Show view angle from a selected camera or Rigid Body | 3 |
| Open single viewport | Shift + 1 |
| Open two viewports, split horizontally | Shift + 2 |
| Open two viewports, split vertically | Shift + 3 |
| Open four viewports | Shift + 4 |
Perspective View Pane (3D)

| Action | Shortcut |
| --- | --- |
| Follow Selected | G |
| Zoom to Fit Selection | F |
| Zoom to Fit All | Shift + F |
| Reset Tracking | Ctrl + R |
| View/Hide Tracked Rays | " |
| View/Hide Untracked Rays | Shift + " |
| Jog Timeline | Alt + Left Click |
| Create Rigid Body From Selected | Ctrl + T |
| Refresh Skeleton Asset | Ctrl + R (with a Skeleton asset selected) |
| Enable/Disable Asset Editing | T |
| Toggle Labeling Mode | D |
| Select Mode | Q |
| Translation Mode | W |
| Rotation Mode | E |
| Scale Mode | R |
Camera Preview (2D)

| Action | Shortcut |
| --- | --- |
| Video Mode: Grayscale | U |
| Video Mode: MJPEG | I |
| Video Mode: Object | O |
Data Management Pane

| Action | Shortcut |
| --- | --- |
| Remove or Delete Session Folders | Delete |
| Remove Selected Take | Delete |
| Paste shots as empty Take from clipboard | Ctrl + V |
Timeline / Graph View

| Action | Shortcut |
| --- | --- |
| Toggle Live/Edit Mode | ~ |
| Again+ | + |
| Live Mode: Record | Space |
| Edit Mode: Start/Stop Playback | Space |
| Rewind (jump to the first frame) | Ctrl + Shift + Left Arrow |
| Page Time Backward (ten frames) | Down Arrow |
| Step Time Backward (one frame) | Left Arrow |
| Step Time Forward (one frame) | Right Arrow |
| Page Time Forward (ten frames) | Up Arrow |
| Fast Forward (jump to the last frame) | Ctrl + Shift + Right Arrow |
| To next gapped frames | Z |
| To previous gapped frames | Shift + Z |
| Graph View: Delete selected keys in 3D data | Delete (when a frame range is selected) |
| Show All | Shift + F |
| Frame To Selected | F |
| Zoom to Fit All | Shift + F |
Editing / Labeling Workflow

| Action | Shortcut |
| --- | --- |
| Apply smoothing to selected trajectory | X |
| Apply cubic fit to the gapped trajectory | C |
| Toggle Labeling Mode | D |
| To next gapped frame | Z |
| To previous gapped frame | Shift + Z |
| Enable/Disable Asset Editing | T |
| Select Mode | Q |
| Translation Mode | W |
| Rotation Mode | E |
| Scale Mode | R |
| Delete selected keys | Delete |
| Property | Description |
| --- | --- |
| Name | Name of the Take that will be recorded. |
| SessionName | Name of the session folder. |
| Notes | Informational note describing the recorded Take. |
| Description | (Reserved) |
| Assets | List of assets involved in the Take. |
| DatabasePath | The file directory where the recorded captures will be saved. |
| Start Timecode | Timecode values (SMPTE) for frame alignment, or for reserving future record trigger events on timecode-supported systems. Camera systems usually run at higher frame rates than the SMPTE timecode. In the triggering packets, the subframe value always equals 0 at the trigger. |
| PacketID | (Reserved) |
| HostName | (Reserved) |
| ProcessID | (Reserved) |
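The subframe relationship between the camera rate and the SMPTE timecode rate can be sketched as follows (a hedged illustration with assumed rates, not Motive's API): a 240 Hz camera system records 8 camera frames per frame of 30 fps timecode, so a camera frame index can be derived from a timecode plus its subframe value.

```python
def camera_frame(hh, mm, ss, ff, subframe, timecode_fps=30, camera_fps=240):
    """Map a non-drop-frame SMPTE timecode plus subframe to a camera frame index."""
    subframes_per_tc_frame = camera_fps // timecode_fps  # e.g. 240 / 30 = 8
    tc_frames = (hh * 3600 + mm * 60 + ss) * timecode_fps + ff
    return tc_frames * subframes_per_tc_frame + subframe

# One second of 30 fps timecode corresponds to 240 camera frames at 240 Hz.
print(camera_frame(0, 0, 1, 0, 0))  # → 240
# At a record trigger, the subframe value is always 0.
```

Drop-frame timecode complicates the seconds-to-frames conversion; the sketch assumes non-drop-frame timecode and a camera rate that is an integer multiple of the timecode rate.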
| Property | Description |
| --- | --- |
| Name | Name of the recorded Take. |
| Notes | Informational notes describing the recorded Take. |
| Assets | List of assets involved in the Take. |
| Timecode | Timecode values (SMPTE) for frame alignment. The subframe value is zero. |
| HostName | (Reserved) |
| ProcessID | (Reserved) |