The new Continuous Calibration feature ensures your system always remains optimally calibrated, with no user intervention required to maintain tracking quality. It uses sophisticated algorithms to evaluate the quality of the calibration and the triangulated marker positions. Whenever tracking accuracy degrades, Motive automatically detects this and updates the calibration to maintain a globally optimized tracking system.
Ease of use. This feature provides a much easier user experience because the capture volume does not need to be re-calibrated as often, saving significant time. Simply enable the feature and Motive will maintain the calibration quality.
Optimal tracking quality. Always maintains the best tracking solution for live camera systems. This ensures that your captured sessions retain the highest quality calibration. If the system receives inadequate information from the environment, the calibration will not update, so your system never degrades based on sporadic or spurious data. A moderate increase in the number of real optical tracking markers in the volume and an increase in camera overlap improve the likelihood of a higher quality update.
Works with all camera types. Continuous calibration works with all OptiTrack camera models, including the V120 Tracking bars, the Flex series camera systems, and the Prime series camera systems as well as the Slim13E camera systems for active marker tracking.
For continuous calibration to work as expected, the following criteria must be met:
Markers Must Be Tracked. Continuous calibration looks at tracked reconstructions to assess and update the calibration. Therefore, at least some number of markers must be tracked within the volume.
Majority of Cameras Must See Markers. A majority of the cameras in a volume need to receive some tracking data within a portion of their field of view in order to initiate the calibration process. Because of this, traditional perimeter camera systems typically work best. Each camera should additionally see at least 4 markers for optimal calibration.
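The criteria above can be sketched as a simple check. This is purely illustrative; the function and its inputs are hypothetical and not part of Motive or its SDKs:

```python
def ready_for_continuous_calibration(markers_per_camera, min_markers=4):
    """Hypothetical sketch of the criteria above: a majority of cameras
    should each see at least `min_markers` tracked markers before a
    calibration update is likely to succeed.

    markers_per_camera: list of tracked marker counts, one per camera.
    """
    qualifying = sum(1 for count in markers_per_camera if count >= min_markers)
    return qualifying > len(markers_per_camera) / 2

# 6 of 8 cameras see 4 or more markers, so an update is likely
print(ready_for_continuous_calibration([6, 5, 4, 4, 7, 4, 1, 0]))  # True
```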
There are two different modes of continuous calibration: Continuous and Continuous + Bumped.
The Continuous mode is used to maintain the calibration quality, and it should be used in most cases. In this mode, Motive monitors how well the tracked rays converge onto tracked markers, and it updates the calibration so corresponding tracked rays converge more precisely. This mode is capable of correcting minor degradations that result from ambient influences, such as thermal expansion of the camera mounting structure.
This mode requires markers to be seen by all of the cameras in the system in order for the calibration to be updated.
The Continuous + Bumped mode combines the continuous calibration refinement described above with the ability to detect and repair cameras that have been bumped and are no longer contributing to 3D reconstruction. With this mode enabled, a bumped camera is automatically resolved and reintroduced into the calibration without requiring the user to perform a manual calibration. For simply maintaining overall calibration quality, use the Continuous mode instead.
Continuous calibration can be enabled or disabled in the Application Settings pane under the Reconstruction tab. Set the Continuous Calibration setting to Continuous or Continuous + Bumped to allow the feature to update the system calibration.
The status of continuous calibration can be monitored in the Status Log pane.
1. Under the Application Settings → Reconstruction tab, set the continuous calibration to Continuous.
2. Once enabled, Motive continuously monitors the residual values in captured marker reconstructions. When the residual value increases, Motive starts sampling data for continuous calibration.
3. Make sure at least a few markers are being tracked in the volume.
4. When a sufficient number of samples have been collected, Motive updates the calibration.
5. When the update succeeds, the result is reported in the Status Log pane.
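The monitor-sample-update cycle described above can be sketched as a loop. All names and thresholds below are hypothetical illustrations, not Motive internals:

```python
def continuous_calibration_step(state, mean_residual_mm, frame_samples,
                                residual_trigger=0.5, samples_needed=1000):
    """One cycle of a hypothetical monitor-sample-update loop: sample
    while residuals are elevated, and apply an update once enough
    samples have accumulated."""
    if mean_residual_mm > residual_trigger:
        state["samples"].extend(frame_samples)   # residuals increased: sample
    if len(state["samples"]) >= samples_needed:
        state["samples"].clear()                 # update applied; reset
        return "calibration updated"             # reported in the Status Log
    return "monitoring"

state = {"samples": []}
print(continuous_calibration_step(state, 0.2, range(100)))  # monitoring
```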
Duo/Trio Tracking Bars: Duo/Trio tracking bars can also utilize this feature to update their calibration and improve tracking quality.
When a camera is bumped and its orientation has shifted significantly, the affected camera will no longer properly contribute to the tracking. As a result, this camera will generate many untracked rays.
1. Under the Application Settings → Reconstruction tab, set the continuous calibration to Continuous + Bumped Camera.
2. Make sure there are one or more 3D reconstructed markers in motion within the field of view of the bumped camera.
3. When a sufficient number of samples have been collected, Motive corrects the bumped camera and updates the system calibration.
4. Check the masking in the 2D Camera Previews. The masks may no longer be properly placed over extraneous reflections after the calibration update. If so, simply re-mask the extraneous reflections. See: Masking
5. (Optional) If needed, export the updated calibration to a CAL file.
Do not use continuous calibration for updating calibration with cameras that have been moved significantly or repositioned entirely. While this feature may be able to handle such cases, this is not the intended use. When a camera is moved, you will need to manually calibrate the volume again for the best tracking quality.
Anchor markers can be set up in Motive to further improve continuous calibration. When properly configured, anchor markers improve continuous calibration updates, especially on systems that consist of multiple sets of cameras separated into different tracking areas by obstructions or walls, without overlapping camera views. They also provide extra assurance that the global origin will not shift during each update, although the continuous calibration feature itself already checks for this.
Follow the steps below to set up anchor markers in Motive:
Adding Anchor Markers in Motive
First, make sure the entire camera volume is fully calibrated and prepared for marker tracking.
Place any number of markers in the volume to assign as anchor markers.
Make sure these markers are securely fixed in place within the volume. It's important that the distances between these markers do not change throughout the continuous calibration updates.
In the 3D viewport, select the markers that are going to be assigned as anchors.
Right-click on the marker to bring up the context menu. Then go to Anchor Markers → Add Selected Markers.
Once markers are added as anchor markers, magenta spheres will appear around the markers indicating the anchors have been set.
Add more anchors as needed; again, it's important that these anchor markers do not move during tracking. If the anchor markers ever need to be reset, for example because a marker was displaced, you can clear the anchor markers and reassign them.
OptiTrack motion capture systems can use both passive and active markers as indicators of 3D position and orientation. An appropriate marker setup is essential for both the quality and reliability of captured data. All markers must be properly placed and must remain securely attached to surfaces throughout the capture. If any markers are taken off or moved, they will become unlabeled from the Marker Set and will stop contributing to the tracking of the attached object. In addition to marker placement, marker counts and specifications (size, circularity, and reflectivity) also influence tracking quality. Passive (retroreflective) markers need to have well-maintained retroreflective surfaces in order to fully reflect the IR light back to the camera. Active (LED) markers must be properly configured and synchronized with the system.
OptiTrack cameras track any surfaces covered with retroreflective material, which is designed to reflect incoming light back to its source. IR light emitted from the camera is reflected by passive markers and detected by the camera’s sensor. Then, the captured reflections are used to calculate the 2D marker position, which is used by Motive to compute 3D position through reconstruction. Depending on which markers are used (size, shape, etc.) you may want to adjust the camera filter parameters from the Live Pipeline settings in Application Settings.
The size of markers affects visibility. Larger markers stand out in the camera view and can be tracked at longer distances, but they are less suitable for tracking fine movements or small objects. In contrast, smaller markers are beneficial for precise tracking (e.g. facial tracking and microvolume tracking), but have difficulty being tracked at long distances or in restricted settings and are more likely to be occluded during capture. Choose appropriate marker sizes to optimize the tracking for different applications.
If you wish to track non-spherical retroreflective surfaces, lower the Circularity value in the 2D object filter in the application settings. This lowers the circle filter threshold so that non-circular reflections can also be considered as markers. However, keep in mind that this also lowers the filtering threshold for extraneous reflections.
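To see why the circularity threshold matters, consider the standard circularity metric, 4πA/P², which is 1.0 for a perfect circle and lower for other shapes. Motive's internal filter is not documented here, so this is only an illustration of the general idea:

```python
import math

def circularity(area, perimeter):
    """Standard shape circularity: 4*pi*A / P^2. A perfect circle scores
    1.0; elongated or angular blobs score lower, which is why lowering
    the threshold admits non-circular reflections as markers."""
    return 4 * math.pi * area / perimeter ** 2

r = 5.0   # a circular marker blob of radius 5 px
print(round(circularity(math.pi * r ** 2, 2 * math.pi * r), 3))  # 1.0

s = 5.0   # a square reflection, 5 px per side
print(round(circularity(s * s, 4 * s), 3))                       # 0.785
```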
All markers need to have a well-maintained retroreflective surface. Every marker must satisfy the brightness Threshold defined in the camera properties to be recognized in Motive. Worn markers with damaged retroreflective surfaces will appear dimmer in the camera view, and tracking may be limited.
Pixel Inspector: You can analyze the brightness of pixels in each camera view by using the pixel inspector, which can be enabled from the Application Settings.
Please contact our Sales team to decide which markers will suit your needs.
OptiTrack cameras can track any surface covered with retro-reflective material. For best results, markers should be completely spherical with a smooth and clean surface. Hemispherical or flat markers (e.g. retro-reflective tape on a flat surface) can be tracked effectively from straight on, but when viewed from an angle, they will produce a less accurate centroid calculation. Hence, non-spherical markers have a smaller trackable range of motion compared to fully spherical markers.
OptiTrack's active solution provides advanced tracking of IR LED markers to accomplish the best tracking results. This allows each marker to be labeled individually. Please refer to the Active Marker Tracking page for more information.
Active (LED) markers can also be tracked with OptiTrack cameras when properly configured. We recommend using OptiTrack’s Ultra Wide Angle 850nm LEDs for active LED tracking applications. If third-party LEDs are used, their illumination wavelength should be at 850nm for best results. Otherwise, light from the LED will be filtered by the band-pass filter.
If your application requires tracking LEDs outside of the 850nm wavelength, the OptiTrack camera should not be equipped with the 850nm band-pass filter, as it will cut off any illumination above or below the 850nm wavelength. An alternative solution is to use the 700nm short-pass filter (for passing illumination in the visible spectrum) and the 800nm long-pass filter (for passing illumination in the IR spectrum). If the camera is not equipped with the filter, the Filter Switcher add-on is available for purchase at our webstore. There are also other important considerations when incorporating active markers in Motive:
Place a spherical diffuser around each LED marker to increase the illumination angle. This will improve the tracking since bare LED bulbs have limited illumination angles due to their narrow beamwidth. Even with wide-angle LEDs, the lighting coverage of bare LED bulbs will be insufficient for the cameras to track the markers at an angle.
If an LED-based marker system will be strobed (to increase range, offset groups of LEDs, etc.), it is important to synchronize their strobes with the camera system. If you require a LED synchronization solution, please contact one of our Sales Engineers to learn more about OptiTrack’s RF-based LED synchronizer.
Many applications that require active LEDs for tracking (e.g. very large setups with long distances from a camera to a marker) will also require active LEDs during calibration to ensure sufficient overlap in-camera samples during the wanding process. We recommend using OptiTrack’s Wireless Active LED Calibration Wand for best results in these types of applications. Please contact one of our Sales Engineers to order this calibration accessory.
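The filter choices described above amount to a simple wavelength gate. The cutoffs below are rough illustrations of the pass bands named in the text (real optical filters have gradual rolloffs), and the function itself is hypothetical:

```python
def passes_filter(wavelength_nm, filter_type):
    """Approximate sketch of which illumination each filter admits.
    Band edges are illustrative, not hardware specifications."""
    if filter_type == "850nm band-pass":
        return 840 <= wavelength_nm <= 860   # narrow band around 850 nm IR
    if filter_type == "700nm short-pass":
        return wavelength_nm <= 700          # visible spectrum passes
    if filter_type == "800nm long-pass":
        return wavelength_nm >= 800          # IR spectrum passes
    raise ValueError(f"unknown filter: {filter_type}")

# An 850 nm LED passes the band-pass filter; a 940 nm LED is blocked.
print(passes_filter(850, "850nm band-pass"))  # True
print(passes_filter(940, "850nm band-pass"))  # False
```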
Proper marker placement is vital for the quality of motion capture data because each marker on a tracked subject is used as an indicator of both position and orientation. When an asset (a Rigid Body or Skeleton) is created in Motive, the unique spatial relationships of its markers are calibrated and recorded. Then, the recorded information is used to recognize the markers in the corresponding asset during the auto-labeling process. For best tracking results, when multiple subjects with a similar shape are involved in the capture, it is necessary to offset their marker placements to introduce asymmetry and avoid congruency.
Read more about marker placements from the Rigid Body Tracking page and the Skeleton Tracking page.
Asymmetry
Asymmetry is the key to avoiding congruency when tracking multiple Marker Sets. When there is more than one similar marker arrangement in the volume, marker labels may be confused. Thus, it is beneficial to place segment markers (joint markers must always be placed on anatomical landmarks) in asymmetrical positions for similar Rigid Bodies and Skeletal segments. This provides a clear distinction between two similar arrangements. Furthermore, avoid placing markers in a symmetrical shape within a segment as well. For example, a perfect square marker arrangement has an ambiguous orientation, and frequent mislabels may occur throughout the capture. Instead, follow the rule of thumb of placing the less critical markers in asymmetrical arrangements.
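One way to see congruency concretely: two marker arrangements with identical sets of pairwise inter-marker distances cannot be told apart by shape alone. The sketch below is illustrative only (comparing sorted distance sets is a necessary, not sufficient, test for congruence), and none of these names come from Motive:

```python
from itertools import combinations
import math

def distance_signature(markers):
    """Sorted pairwise inter-marker distances for a marker arrangement."""
    return sorted(math.dist(a, b) for a, b in combinations(markers, 2))

def possibly_congruent(a, b, tol=1e-3):
    """If two arrangements share a distance signature, their labels are
    ambiguous and mislabeling becomes likely."""
    sig_a, sig_b = distance_signature(a), distance_signature(b)
    return len(sig_a) == len(sig_b) and all(
        abs(x - y) <= tol for x, y in zip(sig_a, sig_b))

square = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
square2 = [(5, 5, 0), (6, 5, 0), (6, 6, 0), (5, 6, 0)]   # same square, moved
offset = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 0.7, 0)]  # one marker offset

print(possibly_congruent(square, square2))  # True: labels ambiguous
print(possibly_congruent(square, offset))   # False: distinguishable
```

This is why offsetting even a single marker on one of two otherwise identical Rigid Bodies is enough to break the ambiguity.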
Prepare the markers and attach them to the subject, whether a Rigid Body or a person. Minimize extraneous reflections by covering shiny surfaces with non-reflective tape. Then, securely attach the markers to the subject using adhesives suitable for the surface. Various types of adhesives and marker bases are available on our webstore for attaching markers: acrylic, rubber, skin adhesive, and Velcro. Multiple types of marker bases are also available: carbon fiber filled bases, Velcro bases, and snap-on plastic bases.
Required PC specifications may vary depending on the size of the camera system. Generally, the recommended specs are required for systems with more than 24 cameras.
To install Motive, you must first download the Motive installer from our website. Follow the Downloads link under the Support page (http://optitrack.com/downloads/), where you will find the newest version of Motive as well as previous releases if needed. Both Motive:Body and Motive:Tracker share the same software installer.
1. Run the Installer
When the download is complete, run the installer to initiate the installation process.
2. Install the USB Driver and Dependencies
If you are installing Motive for the first time, it will prompt you to install the OptiTrack USB Driver. This driver is required for all OptiTrack USB devices, including the Hardware Key. You may also need to install other dependencies, such as the C++ redistributable and DirectX. After all dependencies have been installed, continue on to installing Motive.
It is important to install the specific versions required by Motive 2.3.x, even if newer versions are installed.
3. Install Motive
Follow the installation prompts and install Motive in your desired file directory. We recommend installing the software in the default directory, C:\Program Files\OptiTrack\Motive.
4. OptiTrack Peripheral Module
At the Custom Setup section of the installation process, you will be asked to choose whether to install the Peripheral Module along with Motive. If you plan to use force plates, NI-DAQ, or EMG devices along with motion capture systems, make sure the Peripheral Module is installed. If you are not going to be using these devices, you may skip to the next step.
Peripheral Module NI-DAQ
If you decided to install the Peripheral Module, you will be prompted to install the OptiTrack Peripherals Module along with the NI-DAQmx driver at the end of the Motive installation. Press Yes to install the plugins and the NI-DAQmx driver. This may take a few minutes, and it only needs to be done once.
5. Finish Installation
After you have completed all the steps above, Motive will be installed. If you want to use additional plugins, visit the downloads page.
Firewall / Anti-Virus
Make sure all anti-virus software on the Host PC is allowing Motive.
For Ethernet cameras, make sure the Windows firewall is configured to allow the camera network to be recognized. Disabling the firewall entirely is another option.
High-Performance
Windows power saving modes limit CPU usage. To best utilize Motive, set the power plan to High Performance to remove these limitations. You can configure High Performance mode from Control Panel → Hardware and Sound → Power Options.
Graphics Card Settings
This is only for computers with integrated graphics.
For computers with integrated graphics, please make sure Motive is set to run on the dedicated graphics card. If the host computer has integrated graphics on the CPU, it may switch to the integrated GPU when the computer enters sleep mode; when this happens, the viewport may become unresponsive after waking. To prevent this, go to the Graphics Settings in Windows, browse to Motive, and set it to use high-performance graphics.
Once you have installed Motive, the next step is to activate the software using the provided license information and a USB Security Key. Motive activation requires a valid Motive 3.0 license, a USB Security Key, and a computer with access to the Internet.
For Motive 2.x, a USB Hardware Key is required to use the camera system. The Hardware Key stores licensing information and allows you to use a single license to perform different tasks using different computers. Hardware keys are purchased separately. For more information, please see the following page:
There are five different types of Motive licenses: Motive:Body-Unlimited, Motive:Body, Motive:Tracker, Motive:Edit-Unlimited, and Motive:Edit. Each license unlocks different features in the software depending on the use case that the license is intended to facilitate.
The Motive:Body and Motive:Body-Unlimited licenses are intended for either small (up to 3) or large-scale Skeleton tracking applications.
The Motive:Tracker license is intended for real-time Rigid Body tracking applications.
The Motive:Edit and Motive:Edit-Unlimited licenses are intended for users modifying data after it has been captured.
For more information on different types of Motive licenses, check the software comparison table on our website or in the table below.
Step 1. Launch Motive
First, launch Motive.
Step 2. Activate
The Motive splash screen will pop up and indicate that the license was not found. Click to open the license tool and fill out the fields using the provided license information. You will need the License Serial Number and License Hash from your order invoice, as well as the Hardware Key Serial Number printed on the USB Hardware Key. Once you have entered all the information, click Activate. If you have already activated the license on another machine, make sure to enter the same name when activating.
Online Activation Tool
The Motive license can also be activated online using the Online License Activation tool. When you use this tool, you will receive the license file via email. In this case, you will have to place the file in the license folder. Once the license file is placed, insert the corresponding USB Hardware Key to use Motive.
Step 3. License File
If Motive is activated properly, license files will be placed in the license folder. This folder can be accessed from the splash screen or by navigating to Start Menu → All Programs → OptiTrack → Motive → OptiTrack License Folder.
License Folder: C:\ProgramData\OptiTrack\License
Step 4. Hardware Key
If not already done, insert the corresponding Hardware Key that was used to activate the license. The matching security key must be connected to the computer in order to use Motive.
Notes on Connecting the Hardware Key
Connect the Hardware Key to a USB port whose bus does not have a lot of traffic. This is especially important if you have other peripheral devices that connect to the computer via USB. If there is too much data flowing through the USB bus used by the Hardware Key, Motive might not be able to connect to the cameras.
Make sure the USB Hardware Key is plugged in all the way.
About Motive
You can also check the status of the activated license from the About Motive pop-up. This can be accessed from the splash screen when Motive fails to detect a valid license, or from the Help → About Motive menu in Motive.
License Data:
In this panel, you can also export license data to a TXT file by clicking License Data.... If you are having any issues activating Motive, please export this file and attach it to your support email.
OptiTrack software can be used on a new computer by reactivating the license using the same license information. When reactivating, make sure to enter the same name as before. After the license has been reactivated, the corresponding USB Hardware Key must be inserted into the PC to verify and run the software.
Another method of using the license is by copying the license file from the old computer to the new computer. The license file can be found in the OptiTrack License folder which can be accessed through the Motive Splash Screen or top Help menu in Motive.
For more information on licensing of Motive, refer to the Licensing FAQs from the OptiTrack website:
For more questions, contact our Support:
When contacting support, please attach the license data (TXT) file exported from the About Motive panel as a reference.
Before diving into specific details, let's begin with a brief overview of Motive. If you are new to Motive, we recommend reading through this page to learn about the basic tools, configurations, and navigation controls, as well as instructions on managing capture files.
In Motive, the recorded mocap data is stored in a file format called Take (TAK), and multiple Take files can be grouped within a session folder. The Data pane is the primary interface for managing capture files in Motive. This pane can be accessed from the icon on the main Toolbar, and it contains a list of session folders and the corresponding Take files that are recorded or loaded in Motive.
Motive will save and load Motive-specific file formats including the Take files (TAK), camera calibration files (CAL), and Motive user profiles (MOTIVE) that can contain most of the software settings as well as asset definitions for Skeletons and Rigid Body objects. Asset definitions are related to trackable objects in Motive which will be explained further in the Rigid Body Tracking and Skeleton Tracking page.
Motive file management is centered on the Take (TAK) file. A TAK file is a single motion capture recording (aka 'take' or 'trial'), which contains all the information necessary to recreate the entire capture from the file, including camera calibration, camera 2D data, reconstructed and labeled 3D data, data edits, solved joint angle data, tracking models (Skeletons, Rigid Bodies), and any additional device data (audio, force plate, etc). A Motive Take (TAK) file is a completely self-contained motion capture recording, and it can be opened by another copy of Motive on another system.
Note:
Take files are forward compatible, but not backwards compatible
BAK files:
If you have any old recordings from Motive 1.7 or below with the BAK file extension, please import these recordings into Motive 2.0 first and re-save them in the TAK file format in order to use them in Motive 3.0 or above.
A Session is a file folder that allows the user to organize multiple similar Takes (e.g. Monday, Tuesday, Wednesday, or StaticTrials, WalkingTrials, RunningTrials). Whether you are planning the day's shoot or incorporating a group of Takes mid-project, creating session folders can help manage complex sets of data. In the Data pane, you can import session folders that contain multiple Takes or create a new folder to start a new capture session. For the most efficient workflow, plan the mocap session before the capture and organize a list of captures (shots) that need to be completed. Type the Take names in a spreadsheet or text file, then copy and paste the list into the Data pane; this automatically creates empty Takes (a shot list) with the corresponding names.
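The shot-list idea above can be scripted before a session. The naming scheme here is just an example; generate the names, paste the list into the Data pane, and Motive creates the empty Takes:

```python
def shot_list(session, trials, takes_per_trial):
    """Build a list of take names (e.g. Day1_Static_001) to paste into
    Motive's Data pane. The naming convention is an arbitrary example."""
    return [f"{session}_{trial}_{i:03d}"
            for trial in trials
            for i in range(1, takes_per_trial + 1)]

for name in shot_list("Day1", ["Static", "Walking"], 2):
    print(name)
# Day1_Static_001
# Day1_Static_002
# Day1_Walking_001
# Day1_Walking_002
```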
Software configurations are saved onto the motive profile (*.motive) files. In the motive profile, all of the application-related configurations, lists of assets, and the loaded session folders are saved and preserved. You can export and import the profiles to easily maintain the same software configurations each time Motive is launched.
All of the currently configured software settings are saved to C:\ProgramData\OptiTrack\MotiveProfile.motive periodically throughout capture and when closing out of Motive. This file is the default application profile, and it gets loaded back when Motive is launched again. This allows all of the configurations to persist between different sessions of Motive. If you wish to revert all of the settings to their factory defaults, use the Reset Application Settings button under the Edit tab of the main command bar.
Motive profiles can also be exported and imported from the File menu of the main command bar. Using the profiles, you can easily transfer and persist Motive configurations among different instances and different computers.
The following are saved in the application profile:
Application Settings
Live Pipeline Settings
Streaming Settings
Synchronization Settings
Export Settings
Rigid Body & Skeleton assets
Rigid Body & Skeleton settings
Labeling settings
Hotkey configurations
A calibration file is a standalone file that contains all of the required information to completely restore a calibrated camera volume, including the position and orientation of each camera, lens distortion parameters, and the camera settings. After a camera system is calibrated, the CAL file can be exported and imported back into Motive when needed. Thus, it is recommended to save out the camera calibration file after each round of calibration.
Please note that reconstruction settings are also stored in the calibration file, just as they are stored in the MOTIVE profile. If the calibration file is imported after the profile file was loaded, it may overwrite the previous reconstruction settings as it is imported.
Note that this file is reliable only if the camera setup has remained unchanged since the calibration. Read more from Calibration page.
The following are saved in the calibration file:
Reconstruction settings
Camera settings
Position and orientation of the cameras
Location of the global origin
Lens distortion of each camera
Default System Calibration
The default system calibration is saved to C:\ProgramData\OptiTrack\Motive\System Calibration.cal and is loaded automatically at application startup to provide instant access to the 3D volume. This file also gets updated each time the calibration is modified and when closing out of Motive.
In Motive, the main viewport is fixed at the center of the UI and is used for monitoring the 2D or 3D capture data in both live capture and playback of recorded data. The viewport can be set to either perspective view or camera view. The Perspective View mode shows the reconstructed 3D data within the calibrated 3D space, and the Camera View mode shows 2D images from each camera in the setup. These modes can be selected from the drop-down menu at the top-right corner, and both of these views are essential for assessing and monitoring the tracking data.
Use the dropdown menu at the top-left corner to switch into the Perspective View mode. You can also use the number 1 hotkey while on a viewport.
Used to look through the reconstructed 3D representation of the capture, analyze marker positions, rays used in reconstruction, etc.
The context menu in the Perspective View allows you to access more options related to the markers and assets in 3D tracking data.
Use the dropdown menu at the top-left corner to switch into the Camera View mode. You can also use the number 2 hotkey while on a viewport.
Each camera's view can be accessed from the Camera Preview pane, which displays the images being transmitted from each camera. Images are shown in each camera's current image processing mode, such as grayscale or object mode.
Detected IR lights and/or reflections are also shown in this pane. Only the IR lights that satisfy the object filters get considered as markers.
From the Camera Preview pane, you can mask certain pixel regions to exclude them from the process.
When needed, the viewport can be split into four smaller views. This can be selected from the menu at the top-right corner of the viewport, or with the Shift + 4 hotkey.
Most of the navigation controls in Motive are customizable, including both mouse and Hotkey controls. The Hotkey Editor Pane and the Mouse Control Pane under the Edit tab allow you to customize mouse navigation and keyboard shortcuts to common operations.
Mouse controls in Motive can be customized from the application settings panel to match your preference. Motive also includes a variety of common mouse control presets so that any new users can easily start controlling Motive. Available preset control profiles include Motive, Blade, Maya, and Visual3D. The following table shows a few basics actions that are commonly used for navigating the viewports in Motive.
Using the Hotkeys can speed up workflows. Most of the default hotkeys are listed on the Motive Hotkeys page. When needed, the hotkeys can also be customized from the application settings panel which can be accessed under the Edit tab. Various actions can be assigned with a custom hotkey using the Hotkey Editor.
The Control Deck is always docked at the bottom of Motive, and it provides both recording and navigation controls over Motive's two primary operating modes: Live mode and Edit mode.
In the Live Mode, all cameras are active and the system is processing camera data. If the mocap system is already calibrated, Motive is live-reconstructing 2D camera data into labeled and unlabeled 3D trajectories (markers) in real-time. The live tracking data can be streamed to other applications using the data streaming tools or the NatNet SDK. Also, in Live mode, the system is ready for recording and corresponding capture controls will be available in the Control Deck.
In the Edit Mode, the cameras are not active, and Motive processes a loaded Take file (pre-recorded data). The playback controls will be available in the Control Deck, and a small timeline will appear at the top of the Control Deck for scrubbing through the recorded frames. In this mode, you can review the recorded 3D data from the TAK and make post-processing edits and/or manually assign marker labels to the recorded trajectories before exporting the tracking data. When needed, you can also switch to the 2D mode to view the recorded camera data, understand how the 3D data was obtained, and run the post-processing reconstruction pipeline to re-obtain a new set of 3D data.
Hotkeys: "Shift + ~" is the default hotkey for toggling between Live and Edit modes in Motive.
The Graph View pane is used for plotting live or recorded channel data in Motive. For example, 3D coordinates of the reconstructed markers, 3D positions and orientations of Rigid Body assets, force plate data, analog data from data acquisition devices, and more can be plotted on this pane. You can switch between existing layouts or create a custom layout for plotting specific channel data.
Basic navigation controls are highlighted below. For more information, read through the Graph View pane page.
Navigate Frames (Alt + Left-click + Drag)
Alt + left-click on the graph and drag the mouse left and right to navigate through the recorded frames. You can do the same with the mouse scroll as well.
Panning (Scroll-click + Drag)
Scroll-click and drag to pan the view vertically and horizontally throughout plotted graphs. Dragging the cursor left and right will pan the view along the horizontal axis for all of the graphs. When navigating vertically, scroll-click on a graph and drag up and down to pan vertically for the specific graph.
Zooming (Right-click + Drag)
Right-click on the graph and drag the mouse to zoom in and out on the plotted data.
Other Ways to Zoom:
Press "Shift + F" to zoom out to the entire frame range.
Zoom into a frame range by Alt + right-clicking on the graph and selecting the specific frame range to zoom into.
When a frame range is selected, press "F" to quickly zoom onto the selected range in the timeline.
Selecting Frame Range (Left-click + Drag)
The frame range selection is used when making post-processing edits on specific ranges of the recorded frames. Select a specific range by left-clicking and dragging the mouse left and right; the selected frame range will be highlighted in yellow. You can also select more than one frame range by shift-selecting multiple ranges.
Navigate Frames (Left-click)
Left-click and drag on the nav bar to scrub through the recorded frames. You can do the same with the mouse scroll as well.
Pan View Range
Scroll-click and drag to pan the view range.
Frame Range Zoom
Zoom into a frame range by re-sizing the scope range using the navigation bar handles. You can also easily do this by Alt + right-clicking on the graph and selecting a specific range to zoom into.
Working Range / Playback range
The working range (also called the playback range) is both the view range and the playback range of a corresponding Take in Edit mode. Recorded tracking data is played back and shown on the graphs only within the working frame range. This range can also be used to output a specific frame range when exporting tracking data from Motive.
The working range can be set from different places:
In the navigation bar of the Graph View pane, you can drag the handles on the scrubber to set the working range.
You can also use the navigation controls on the Graph View pane to zoom in or zoom out on the frame ranges to set the working range.
Start and end frames of a working range can also be set from the Control Deck when in the Edit mode.
Selection Range
The selection range is used to apply post-processing edits only to a specific frame range of a Take. The selected frame range will be highlighted in yellow in both the Graph View pane and the Timeline pane.
Gap indication
When playing back a recorded capture, red shading on the navigation bar indicates the amount of occlusion among labeled markers. Brighter red means that more markers have labeling gaps at that frame.
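As a rough illustration of how such an indicator could be derived, the sketch below computes, for a given frame, the fraction of labeled markers that are occluded; a higher fraction would correspond to a brighter red. The data structure and function are hypothetical, not Motive internals.

```python
# Hypothetical sketch: per-frame gap fraction for a navigation-bar indicator.
# labeled_tracks maps a marker label to the set of frames where it was tracked.

def gap_intensity(labeled_tracks, frame):
    """Fraction of labeled markers that have a gap (occlusion) at `frame`."""
    if not labeled_tracks:
        return 0.0
    gaps = sum(1 for frames in labeled_tracks.values() if frame not in frames)
    return gaps / len(labeled_tracks)

tracks = {
    "Hip":   {0, 1, 2, 3},
    "LKnee": {0, 1, 3},   # occluded at frame 2
    "RKnee": {0, 3},      # occluded at frames 1 and 2
}
print(gap_intensity(tracks, 2))  # 2 of 3 markers have gaps -> brighter red
```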
This pane is used for configuring application-wide settings, which include startup configurations, display options for both 2D and 3D viewports, settings for asset creation, and, most importantly, live-pipeline parameters for the Solver and the 2D Filter settings for the cameras. The Cameras tab includes the 2D filter settings that determine which reflections get considered as marker reflections in the camera views, and the Solver settings determine which 3D markers get reconstructed in the scene from a group of marker reflections across all of the cameras. References for the available settings are documented in the Application Settings page.
If you wish to reset the default application setting, go to Reset Application Settings under the Edit tab.
Solver Settings
Under the Solver tab, you can configure the real-time solver engine. These settings, including the trajectorizer settings, are among the most important in Motive. They determine how 3D coordinates are acquired from the captured 2D camera images and how they are used for tracking Rigid Bodies and Skeletons. Thus, understanding these settings is very important for optimizing the system for the best tracking results.
Camera Settings
Under the Cameras tab, you can configure the 2D camera filter settings (circularity filter and size filter) as well as other display options for the cameras. The 2D camera filter setting is one of the key settings for optimizing the capture. For most applications, the default settings work well, but it is still beneficial to understand some of the core settings for more efficient control over the camera system.
For more information, read through the Application Settings: Live Pipeline page and the Reconstruction and 2D Mode page.
The UI layout in Motive is customizable. All panes can be docked and undocked from the UI. Each pane can be positioned and organized by drag-and-drop using the on-screen docking indicators. Panes may float, dock, or stack. When stacked together, they form a tabbed window for quickly cycling through. Layouts in Motive can be saved and loaded, allowing a user to switch quickly between default and custom configurations suitable for different needs. Motive has preset layouts for Calibration, Creating a Skeleton, Capturing (Record), and Editing workflows. Custom layouts can be created, saved, and set as default from the Main Menu -> 'Layout' menu item. Quickly restore a particular layout from the Layout menu, the Layout Dropdown at the top right of the Main Menu, or via HotKeys.
Note: Layout configurations from Motive versions older than 2.0 cannot be loaded in the latest versions of Motive. Please re-create and update the layouts for use.
This page covers the basic types of trackable assets in Motive. Assets are used both for tracking objects and for labeling 3D markers, and they are managed under the Assets pane, which can be opened by clicking on the icon. Each type of asset is further explained in the related pages.
Once Motive is prepared, the next step is to place markers on the subject and create corresponding assets. There are three different types of assets in Motive:
Marker Set
Rigid Body
Skeleton
For each Take, involved assets are displayed in the Assets pane, and the related properties show up at the Properties pane when an asset is selected within Motive.
The Marker Set is a list of marker labels that are used to annotate reconstructed markers. Marker Sets should only be used in situations where it is not possible to define a Rigid Body or Skeleton. In this case, the user will manually label markers in post-processing. When doing so, having a defined set of labels (Marker Set) makes this process much easier. Marker Sets within a Take will be listed in the Labels pane, and each label can be assigned through the Labeling process.
Rigid body and Skeleton assets are the Tracking Models. Rigid bodies are created for tracking rigid objects, and Skeleton assets are created for tracking human motions. These assets automatically apply a set of predefined labels to reconstructed trajectories using Motive's tracking and labeling algorithms, and Motive uses the labeled markers to calculate the position and orientation of the Rigid Body or Skeleton Segment. Both Rigid Body and Skeleton tracking data can be sent to other pipelines (e.g. animations and biomechanics) for extended applications. If new Skeletons or Rigid Bodies are created during post-processing, the take will need to be reconstructed and auto-labeled in order to apply the changes to the 3D data.
Assets may be created either in Live mode (before capture) or in Edit mode (after capture, from a loaded TAK file).
The Assets pane lists out all assets that are available in the current capture. You can easily copy these assets onto other recorded Take(s) or to the live capture by doing the following:
Copying Assets to a Recorded _Take_
In order to copy and paste assets onto another Take, right-click on the desired Take to bring up the context menu and choose Copy Assets to Takes. This will bring up a dialog window for selecting which assets to move.
Copying Assets to Multiple Recorded _Take(s)_
If you wish to copy assets to multiple Takes, select multiple takes from the Data pane until the desired takes are all highlighted. Repeat the steps you took above for copying a single Take by right-clicking on any of the selected Takes. This should copy the assets you selected to all the selected Takes in the Data pane.
Copying Assets from a Recorded _Take_ to the Live Capture
If you have a list of assets in a Take that you wish to import into the live capture, you can simply do this by right-clicking on the desired assets on the Assets pane, and selecting Copy Assets to Live.
For selecting multiple items, use Shift-click or Ctrl-click.
Assets can be exported into a Motive user profile (.MOTIVE) file if they need to be re-imported. The user profile is a text-readable file that can contain various configuration settings in Motive, including asset definitions.
When asset definitions are exported to a MOTIVE user profile, the file stores the marker arrangements calibrated in each asset, and the assets can be imported into different Takes without creating new ones in Motive. Note that these files specifically store the spatial relationship of each marker; therefore, only identical marker arrangements will be recognized and defined by the imported asset.
To export the assets, go to Files tab → Export Assets to export all of the assets in the Live-mode or in the current TAK file. You can also use Files tab → Export Profile to export other software settings including the assets.
Tracking data can be exported into the C3D file format. C3D (Coordinate 3D) is a binary file format that is widely used in biomechanics and motion study applications. Recorded data from external devices, such as force plates and NI-DAQ devices, will be included in exported C3D files. Note that common biomechanics applications use a Z-up right-handed coordinate system, whereas Motive uses a Y-up right-handed coordinate system. More details on coordinate systems are described in a later section. Find more about C3D files from .
General Export Options
| Option | Description |
| --- | --- |
C3D Specific Export Options
| Option | Description |
| --- | --- |
Common Conventions
Since Motive uses a different coordinate system than the system used in common biomechanics applications, it is necessary to modify the coordinate axis to a compatible convention in the C3D exporter settings. For biomechanics applications using z-up right-handed convention (e.g. Visual3D), the following changes must be made under the custom axis.
X axis in Motive should be configured to positive X
Y axis in Motive should be configured to negative Z
Z axis in Motive should be configured to positive Y.
This will convert the coordinate axes of the exported data so that the x-axis represents the mediolateral axis (left/right), the y-axis represents the anteroposterior axis (front/back), and the z-axis represents the longitudinal axis (up/down).
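Applied to a single point, the remapping above amounts to the following. This is a sketch of the axis conversion as described, assuming the exported Y takes Motive's negative Z and the exported Z takes Motive's Y; the function name is illustrative, not part of any Motive API.

```python
def motive_to_zup(p):
    """Map a point from Motive's Y-up right-handed frame to a Z-up
    right-handed frame (e.g. for Visual3D): X -> +X, Y -> -Z, Z -> +Y."""
    x, y, z = p
    return (x, -z, y)

# A marker 1.8 m above the ground and 0.5 m along Motive's -Z axis:
print(motive_to_zup((0.2, 1.8, -0.5)))  # -> (0.2, 0.5, 1.8): height is now Z
```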
MotionBuilder Compatible Axis Convention
This is a preset convention for exporting C3D files for use in Autodesk MotionBuilder. Even though Motive and MotionBuilder both use the same coordinate system, MotionBuilder assumes biomechanics standards when importing C3D files (negative X axis to positive X axis; positive Y to positive Z; positive Z to positive Y). Accordingly, when exporting C3D files for MotionBuilder use, set the Axis setting to MotionBuilder Compatible, and the axes will be exported using the following convention:
Motive: X axis → Set to negative X → Mobu: X axis
Motive: Y axis → Set to positive Z → Mobu: Y axis
Motive: Z axis → Set to positive Y → Mobu: Z axis
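Written as a point transform, the export-time remapping above looks like the following sketch; the function name is illustrative. Motive's X is negated, Motive's Y is written to the exported Z, and Motive's Z to the exported Y.

```python
def motive_to_mobu_c3d(p):
    """Axis remapping for the MotionBuilder Compatible C3D export:
    exported X = -Motive X, exported Y = Motive Z, exported Z = Motive Y."""
    x, y, z = p
    return (-x, z, y)

print(motive_to_mobu_c3d((1.0, 2.0, 3.0)))  # -> (-1.0, 3.0, 2.0)
```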
There is a known behavior where C3D data imported with timecode does not show up accurately in MotionBuilder. This happens because MotionBuilder sets the subframe counts in the timecode using its own playback rate instead of the rate of the timecode. When this happens, set the playback rate in MotionBuilder to match the rate of the timecode generator (e.g. 30 Hz) to get correct timecode. This occurs only with C3D import in MotionBuilder; FBX import works fine without changing the playback rate.
In Motive, Skeleton assets are used for tracking human motions. These assets auto-label specific sets of markers attached to human subjects, or actors, and create skeletal models. Unlike Rigid Body assets, Skeleton assets require additional calculations to correctly identify and label 3D reconstructed markers on multiple semi-rigid body segments. In order to accomplish this, Motive uses pre-defined Skeleton Marker Set templates, each of which is a collection of marker labels and their specific positions on a subject. According to the selected Marker Set, retroreflective markers must be placed on pre-designated locations of the body. This page details instructions on how to create and use Skeleton assets in Motive.
Example display settings in skeleton assets
Note:
Motive license: Skeleton features are supported only in Motive:Body or Motive:Body - Unlimited.
Skeleton Count: The standard Motive:Body license supports up to 3 Skeletons. For tracking a higher number of Skeletons, activate with a Motive:Body - Unlimited license.
Height requirement: For Skeleton tracking, the subject must be between 1' 7" and 9' 10" tall.
Use the default create layout to open related panels that are necessary for Skeleton creation. (CTRL + 2).
When it comes to tracking human movements, proper marker placement is especially important. Motive utilizes pre-programmed Skeleton Marker Sets, and each marker is used to indicate an anatomical landmark when modeling the Skeleton. Thus, all of the markers must be placed at their appropriate locations. If any markers are misplaced, the Skeleton asset may not be created, and even if it is created, bad marker placements may lead to problems. Taking extra care to place the markers at their intended locations is very important and can save time in post-processing of the data.
Attaching markers directly onto a person's skin can be difficult because of hair, oils, and moisture from sweat. In addition, dynamic human motions tend to move the markers during capture, so use appropriate skin adhesives to secure marker bases onto the skin. Alternatively, mocap suits allow velcro marker bases to be used.
Open and go to the Skeleton creation feature. Select the Marker Set you wish to use from the drop-down menu. The total number of required markers for each Skeleton is indicated in parentheses after each Marker Set name, and the corresponding marker locations are displayed over an avatar that shows up in the . Instruct the subject to strike a calibration pose (T-pose or A-pose), then carefully follow the figure and place retroreflective markers at the corresponding locations on the subject.
Joint Markers
Joint markers need to be placed carefully along corresponding joint axes. Proper placements will minimize marker movements during a range of motions and will give better tracking results. To accomplish this, ask the subject to flex and extend the joint (e.g. knee) a few times and palpate the joint to locate the corresponding axis. Once the axis is located, attach the markers along the axis where skin movement is minimal during a range of motion.
Wipe out any moisture or oil on the skin before attaching the marker.
Avoid wearing clothing or shoes with reflective materials since they can introduce extraneous reflections.
Tie up hair which can occlude the markers around the neck.
Remove reflective jewelry.
Place markers in an asymmetrical arrangement by offsetting the related segment markers (markers that are not on joints) at slightly different heights.
Additional Tips
All markers need to be placed at the respective anatomical landmarks.
Place markers where you can palpate the bone or where there are less soft tissues in between. These spots have fewer skin movements and provide secure marker attachment.
Joint markers are vulnerable to skin movements because of the range of motion in the flexion and extension cycle. In order to minimize the influence, a thorough understanding of the biomechanical model used in the post-processing is necessary. In certain circumstances, the joint line may not be the most appropriate location. Instead, placing the markers slightly superior to the joint line could minimize soft tissue artifact, still taking care to maintain parallelism with the anatomical joint line.
Use appropriate adhesives to place each marker and make sure they are securely attached.
Step 1.
Step 2.
Step 3.
Step 4.
Step 5.
Step 6.
The next step is to select the Skeleton creation pose settings. Under the Pose section drop-down menu, select the desired calibration pose you want to use for defining the Skeleton. This is set to the T-pose by default.
Step 7.
Step 8.
Click Create to create the Skeleton. Once the Skeleton model has been defined, confirm that all Skeleton segments and assigned markers are located at the expected locations. If any of the Skeleton segments seem to be misaligned, delete the Skeleton and create it again after adjusting the marker placements and the calibration pose.
In Edit Mode
Virtual Reality Markersets
A proper calibration posture is necessary because the pose of the created Skeleton will be calibrated from it. Read through the following explanations on proper T-poses and A-poses.
T pose
The T-pose is commonly used as the reference pose in 3D animation to bind two characters or assets together. Motive uses this pose when creating Skeletons. A proper T-pose requires an upright posture with the back straight and the head looking directly forward. Both arms are stretched out to the sides, forming a "T" shape. Both arms and legs must be straight, and both feet need to be aligned parallel to each other.
A pose
Palms Down: Arms straight and abducted sideways approximately 40 degrees, palms facing downward.
Palms Forward: Arms straight and abducted sideways approximately 40 degrees, palms facing forward. Be careful not to over-rotate the arms.
Elbows Bent: Similar to the other A-poses: arms abducted approximately 40 degrees, elbows bent so that the forearms point towards the front. Palms facing downward, both forearms aligned.
Calibration markers exist only in the biomechanics Marker Sets.
Many Skeleton Marker Sets do not have medial markers because they can easily collide with other body parts or interfere with the range of motion, all of which increase the chance of marker occlusions.
However, medial markers are beneficial for precisely locating joint axes by associating two markers on the medial and lateral side of a joint. For this reason, some biomechanics Marker Sets use medial markers as calibration markers. Calibration markers are used only when creating Skeletons but removed afterward for the actual capture. These calibration markers are highlighted in red from the 3D view when a Skeleton is first created.
Existing Skeleton assets can be recalibrated using the existing Skeleton information. Recalibration recreates the selected Skeleton using the same Skeleton Marker Set and refreshes the expected marker locations on the asset.
Skeleton recalibration does not work with Skeleton templates that have added markers.
Skeleton Marker Sets can be modified slightly by adding markers to, or removing markers from, the template. Follow the steps below for adding/removing markers. Note that modifying, especially removing, Skeleton markers is not recommended, since changes to the default templates may negatively affect Skeleton tracking when done incorrectly. Removing too many markers may result in poor Skeleton reconstructions, while adding too many markers may lead to label swaps. If any modification is necessary, try to keep the changes minimal.
To Add
Select a Skeleton segment that you wish to add extra markers onto.
Then, CTRL + left-click on the marker that you wish to add to the template.
On the Asset Model Markers tool in the Builder pane, click + to add and associate the selected marker to the selected segment
Reconstruct and Auto-label the Take.
To Remove
[Optional] Under the advanced properties of the target Skeleton, enable Marker Lines property to view which markers are associated with different Skeleton bones.
Delete the association by clicking on the "-" next to the Asset Model Markers tool in the Builder pane while both the target marker and the target segment are selected.
Reconstruct and Auto-label the Take.
When asset definitions are exported to a MOTIVE user profile, the file stores the marker arrangements calibrated in each asset, and the assets can be imported into different Takes without creating new ones in Motive. Note that these files specifically store the spatial relationship of each marker; therefore, only identical marker arrangements will be recognized and defined by the imported asset.
To export the assets, go to Files tab → Export Assets to export all of the assets in the Live-mode or in the current TAK file. You can also use Files tab → Export Profile to export other software settings including the assets.
For biomechanics applications, joint angles must be computed accurately using the respective Skeleton model solve, which can be accomplished with biomechanical analysis software. Export C3D files or stream tracking data from Motive and import it into analysis software for further calculation. From the analysis, various biomechanics metrics, including joint angles, can be obtained.
To export Skeleton constraints XML file
To import Skeleton constraints XML file
Once the capture volume is calibrated and all markers are placed, you are now ready to capture Takes. In this page, we will cover key concepts and tips that are important for the recording pipeline. For real-time tracking applications, you can skip this page and read through the page.
There are two different modes in Motive: Live mode and Edit mode. You can toggle between the two modes from the or by using the (Shift + ~) hotkey.
Live Mode
The Live mode is mainly used when recording new Takes or when streaming a live capture. In this mode, all of the cameras are continuously capturing 2D images and reconstructing the detected reflections into 3D data in real-time.
Edit Mode
The Edit Mode is used for playback of captured Take files. In this mode, you can play back, or stream, recorded data. Captured Takes can also be post-processed by fixing mislabeling errors or interpolating occluded trajectories if needed.
Tip: For Skeleton tracking, always start and end the capture with a T-pose or A-pose, so that the Skeleton assets can be redefined from the recorded data as well.
Tip: Efficient ways of managing Takes
Always start by creating session folders for organizing related Takes. (e.g. name of the tracked subject).
Plan ahead and create a list of captures in a text file or a spreadsheet, and you can create empty takes by copying and pasting the list into the Data Management pane (e.g. walk, jog, run, jump).
Once pasted, empty Takes with the corresponding names will be imported.
Select one of the empty takes and start recording. The capture will be saved with the corresponding name.
When captured successfully, select another empty Take in the list and capture the next one.
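The capture list from the tip above can be produced with a few lines of script and then pasted into the Data Management pane; a minimal sketch (the subject and motion names are illustrative):

```python
# Build a newline-separated capture list to paste into Motive's
# Data Management pane, creating one empty Take per planned capture.
subject = "subject01"
motions = ["walk", "jog", "run", "jump"]
take_names = [f"{subject}_{motion}" for motion in motions]
print("\n".join(take_names))
```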
2D data: The recorded Take file includes just the 2D object images from each camera.
3D data: The recorded Take file also includes reconstructed 3D marker data in addition to 2D data.
Marker data, labeled or unlabeled, represent the 3D positions of markers. These markers do not represent Rigid Body or Skeleton solver calculations; they show the actual marker positions calculated from the camera data. These markers are represented as solid spheres in the viewport. By default, unlabeled markers are colored white, and labeled markers have colors that reflect the color setting of the Rigid Body or the corresponding bone.
Labeled Marker Colors:
Colors of the Rigid Body labeled markers can be changed from the properties of the corresponding asset.
Colors of the markers can be changed from the Constraints XML file if needed.
Rigid body markers, or bone markers, are expected marker positions. They appear as transparent spheres within a rigid body or a skeleton, and they reflect the position where the rigid body or skeleton solver expects to find a corresponding reconstructed marker. Calculating these positions assumes that the marker is fixed on a rigid segment that doesn't deform over the course of capture. When the rigid body or skeleton solver is correctly tracking reconstructed markers, the marker reconstructions and the expected marker positions will have similar position values and will closely align in the viewport.
When a rigid body is created, its associated markers will appear as a network of lines between the markers. Skeleton marker expected positions will be located next to body segments, or bones. Please see Figure 2. If the marker placement is distorted during capture, the actual marker position will deviate from the expected position; eventually, the marker may become unlabeled. Figure 1 shows how actual and expected marker positions can align with or deviate from each other. Due to the nature of marker-based mocap systems, labeling errors may occur during capture. Thus, understanding each marker type in Motive is very important for correct interpretation of the data.
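The alignment between actual and expected marker positions can be thought of as a simple distance check. The sketch below is illustrative only; the 2 cm threshold is an arbitrary example, not a Motive default.

```python
import math

def deviation(actual, expected):
    """Euclidean distance between a reconstructed marker and the
    solver's expected marker position (math.dist requires Python 3.8+)."""
    return math.dist(actual, expected)

actual   = (0.100, 1.250, 0.300)   # reconstructed marker (meters)
expected = (0.102, 1.249, 0.304)   # solver's expected position

d = deviation(actual, expected)
print(f"{d * 1000:.1f} mm")        # a few millimeters: marker tracks well
print(d > 0.02)                    # beyond 2 cm, the label might be dropped
```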
This page provides information on aligning a Rigid Body pivot point with a 3D model that replicates a real object.
When using streamed Rigid Body data to animate a 3D model that replicates a real-life object, the pivot points must be aligned; in other words, the location of the Rigid Body pivot must coincide with the location of the pivot point in the corresponding 3D model. If they are not aligned accurately, the animated motion will not be in a 1:1 ratio compared to the actual motion. This alignment is commonly needed for real-time VR applications where real-life objects are 3D modeled and animated in the scene. The suggested approaches for aligning these pivot points are discussed on this page.
There are two methods for doing this: using a measurement probe to sample 3D reference points, or simply aligning against a reference grayscale view. The first method, creating and using a measurement probe, is the most accurate and is recommended.
Step 1. Create a Rigid Body of the target object
First, create a Rigid Body from the markers on the target object. By default, the pivot point of the Rigid Body will be positioned at the geometrical center of the marker placement. Then place the object somewhere stable where it will remain stationary.
Step 2. Create a measurement probe.
Step 3. Collect data points to outline the silhouette
Step 4. Attach 3D model
From the sampled 3D points, you can also export the markers created from the probe to Maya or other content creation packages to generate models guaranteed to scale correctly.
Step 5. Translate the pivot point
Step 6. Copy transformation values
Step 7. Zero all transformation values in the Attached Geometry section
Once the Rigid Body pivot point has been moved using the Builder pane, zero all of the transformation configurations under the Attached Geometry property for the Rigid Body.
This page provides instructions on how to utilize the Gizmo tool for modifying asset definitions (Rigid Bodies and Skeletons) on the of Motive.
The gizmo tools allow users to make modifications on reconstructed 3D markers, Rigid Bodies, or Skeletons for both real-time and post-processing of tracking data. This page provides instructions on how to utilize the gizmo tools.
Use the gizmo tools from the perspective view options to easily modify the position and orientation of Rigid Body pivot points. You can translate and rotate the Rigid Body pivot, assign the pivot to a specific marker, and/or assign the pivot to a mid-point among selected markers.
Select Tool (Hotkey: Q): Select tool for normal operations.
Translate Tool (Hotkey: W): Translate tool for moving the Rigid Body pivot point.
Rotate Tool (Hotkey: E): Rotate tool for reorienting the Rigid Body coordinate axis.
Scale Tool (Hotkey: R): Scale tool for resizing the Rigid Body pivot point.
Precise Position/Orientation: When translating or rotating the Rigid Body, you can CTRL + select a 3D reconstruction from the scene to precisely position the pivot point, or align a coordinate axis, directly on, or towards, the selected marker. Multiple reconstructions can also be selected, and their geometrical center (midpoint) will be used as the target reference.
You can utilize the gizmo tools to modify skeleton bone lengths and joint orientations, or to scale the spacing of the markers. Translating and rotating skeleton assets changes how each skeleton bone is positioned and oriented with respect to the tracked markers; thus, any changes to the skeleton definition will affect how realistically the human movement is represented.
The scale tool modifies the size of selected skeleton segments.
The gizmo tools can also be used to edit the positions of reconstructed markers. In order to do this, you must be working with reconstructed 3D data in post-processing. In live tracking, or in 2D mode performing live reconstruction, marker positions are reconstructed frame-by-frame and cannot be modified. The Edit Assets mode must be disabled to do this (Hotkey: T).
Translate
Using the translate tool, 3D positions of reconstructed markers can be modified. Simply click on the markers, turn on the translate tool (Hotkey: W), and move the markers.
Rotate
Using the rotate tool, the 3D positions of a group of markers can be rotated about their center. Simply select a group of markers, turn on the rotate tool (Hotkey: E), and rotate them.
Scale
Using the scale tool, the 3D spacing of a group of markers can be scaled. Simply select a group of markers, turn on the scale tool (Hotkey: R), and scale their spacing.
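The rotate and scale operations above both act about the selection's centroid; the sketch below illustrates that geometry. The functions are illustrative, not Motive's API.

```python
import math

def centroid(pts):
    """Geometrical center of a list of (x, y, z) points."""
    n = len(pts)
    return tuple(sum(p[i] for p in pts) / n for i in range(3))

def scale_about_centroid(pts, s):
    """Scale marker spacing by factor s, keeping the centroid fixed."""
    c = centroid(pts)
    return [tuple(c[i] + s * (p[i] - c[i]) for i in range(3)) for p in pts]

def rotate_y_about_centroid(pts, deg):
    """Rotate the marker group about the vertical (Y) axis through its centroid."""
    c = centroid(pts)
    a = math.radians(deg)
    out = []
    for p in pts:
        x, y, z = (p[0] - c[0], p[1] - c[1], p[2] - c[2])
        out.append((c[0] + x * math.cos(a) + z * math.sin(a),
                    c[1] + y,
                    c[2] - x * math.sin(a) + z * math.cos(a)))
    return out

markers = [(0.0, 1.0, 0.0), (0.2, 1.0, 0.0), (0.1, 1.2, 0.0)]
scaled = scale_about_centroid(markers, 2.0)  # spacing doubled, centroid unchanged
print(centroid(scaled))
```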
Cameras can be modified using the gizmo tool if the Settings Window > General > Calibration > "Editable in 3D View" property is enabled. Without this property turned on the gizmo tool will not activate when a camera is selected to avoid accidentally changing a calibration. The process for using the gizmo tool to fix a misaligned camera is as follows:
Select the camera you wish to fix, then view from that camera (Hotkey: 3).
Select either the Translate or Rotate gizmo tool (Hotkey: W or E).
Use the red diamond visual to align the unlabeled rays roughly onto their associated markers.
Right-click, then choose "Correct Camera Position/Orientation". This will perform a calculation to place the camera more accurately.
Turn on Continuous Calibration if not already done. Continuous calibration should finish aligning the camera into the correct location.
This page explains different types of captured data in Motive. Understanding these types is essential in order to fully utilize the data-processing pipelines in Motive.
There are three different types of data: 2D data, 3D data, and Solved data. Each type will be covered in detail throughout this page, but basically, 2D data is the captured camera frame data, 3D data is the reconstructed 3-dimensional marker data, and Solved data is the calculated positions and orientations of Rigid Bodies and Skeleton segments.
Motive saves tracking data into a Take file (TAK extension), and when a capture is initially recorded, all of the 2D data, real-time reconstructed 3D data, and solved data are saved onto a Take file. Recorded 3D data can be post-processed further in , and when needed, a new set of 3D data can be re-obtained from saved 2D data by performing the reconstruction pipelines. From the 3D data, Solved data can be derived.
Available data types are listed on the . When you open up a Take in , the loaded data type will be highlighted at the top-left corner of the 3D viewport. If available, 3D Data will be loaded first by default, and the 2D data can be accessed by entering the from the Data pane.
2D data is the foundation of motion capture data. It mainly includes the 2D frames captured by each camera in a system.
Recorded 2D data can be reconstructed and auto-labeled to derive the 3D data.
3D tracking data has not yet been computed. Tracking data can be exported only after the 3D data has been reconstructed.
During playback of recorded 2D data, the 2D data will be live-reconstructed into 3D data and displayed in the 3D viewport.
Reconstructed 3D marker positions.
Marker labels can be assigned.
Assets are modeled and the tracking information is available.
Record Solved Data
Deleting 3D data for a single _Take_
When no frame range is selected, this deletes 3D data from the entire frame range. When a frame range is selected in the Timeline Editor, this deletes 3D data in the selected ranges only.
Deleting 3D data for multiple _Takes_
When a Rigid Body or Skeleton exists in a Take, Solved data can be recorded. From the Assets pane, right-click one or more assets and select Solve from the context menu to calculate the solved data. To delete it, simply click Remove Solve.
Deleting labels for a single _Take_
When no frame range is selected, this unlabels all markers across the entire Take. When a frame range is selected from the Timeline Editor, this unlabels markers in the selected ranges only.
Deleting labels for multiple _Takes_
Even when a frame range is selected from the timeline, this unlabels all markers across all frame ranges of the selected Takes.
During the calibration process, a calibration square is used to define the global coordinate axes as well as the ground plane for the capture volume. Each calibration square has a different vertical offset value. When defining the ground plane, Motive will recognize the square and ask the user whether to change the value to the matching offset.
Square Type | Description |
---|---|
For Motive 1.7 or higher, Right-Handed Coordinate System is used as the standard, across internal and exported formats and data streams. As a result, Motive 1.7 now interprets the L-Frame differently than previous releases:
Recommended | Minimum |
---|---|
License | Motive Edit | Motive Edit Unlimited | Motive Tracker | Motive Body | Motive Body Unlimited |
---|---|---|---|---|---|
When needed, an additional Viewer pane can be opened under the View tab or by clicking the icon on the main toolbar.
Function | Default Control |
---|---|
Switching to Live Mode in Motive using the control deck.
Right-click and drag on a graph to free-form zoom in and out on both the vertical and horizontal axes. If Autoscale Graph is enabled, the vertical axis range will be fixed according to the max and min values of the plotted data.
The Application Settings can be accessed under the Edit tab or by clicking the icon on the main toolbar.
All markers need to be placed at respective anatomical locations of a selected Skeleton as shown in the . Skeleton markers can be divided into two categories: markers that are placed along joint axes (joint markers) and markers that are placed on body segments (segment markers).
Segment markers are markers that are placed on Skeleton body segments, but not around a joint. For the best tracking results, each segment marker's placement must differ from that of the corresponding segment on the opposite side of the Skeleton (e.g. left thigh and right thigh), and segment markers must also be placed asymmetrically within each segment. This helps the Skeleton solver reliably distinguish the left and right sides of the corresponding Skeleton segments throughout the capture. This asymmetrical placement is also emphasized in the avatars shown in the Builder pane. Segment markers that can be moved slightly to different places on the same segment are highlighted on the 3D avatar in the Skeleton creation window.
See also:
When using the biomechanics Marker Sets, markers must be placed precisely and with extra care, because these placements directly relate to the coordinate system definition of each respective segment and thus affect the resulting biomechanical analysis. The markers need to be placed on the skin for a direct representation of the subject's movement; mocap suits are not suitable for biomechanics applications. While the basic marker placement must follow the avatar in the Builder pane, additional details on accurate placements can be found on the following page: .
From the Skeleton creation options on the , select a Skeleton Marker Set template from the Template drop-down menu. This will bring up a Skeleton avatar displaying where the markers need to be placed on the subject.
Refer to the avatar and place the markers on the subject accordingly. For accurate placements, ask the subject to stand in the calibration pose while placing the markers. It is important that these markers get placed at the right spots on the subject's body for the best Skeleton tracking. Thus, extra attention is needed when placing the .
The magenta markers indicate the that can be placed at a slightly different position within the same segment.
Double-check the marker counts and their placements. It may be easier to use the in Motive to do this. The system should be tracking the attached markers at this point.
In the Builder pane, make sure the numbers under the Markers Needed and Markers Detected sections are matching. If the Skeleton markers are not automatically detected, manually select the Skeleton markers from the .
Select a desired set of marker labels under the Labels section. Here, you can just use the Default labels to assign labels that are defined by the Marker Set template. Or, you can also assign custom labels by loading previously prepared files in the label section.
Ask the subject to stand in the selected calibration pose. Here, standing in a proper calibration posture is important because the pose of the created Skeleton will be calibrated from it. For more details, read the section.
If you are creating a Skeleton during post-processing of captured data, you will need to reconstruct and auto-label the Take to see the Skeleton modeled and tracked in Motive.
Skeleton markersets for VR applications have slightly different setup steps. See:
By configuring , you can modify the display settings as well as Skeleton creation pose settings for Skeleton assets. For newly created Skeletons, default Skeleton creation properties are configured under the pane. Properties of existing, or recorded, Skeleton assets are configured under the while the respective Skeletons are selected in Motive.
The A-pose is another type of calibration pose that is used to create Skeletons. Set the Skeleton Create Pose setting to the A-pose you wish to calibrate with. This pose is especially beneficial for subjects who have restrictions in lifting the arm. Unlike the T-pose, arms are abducted at approximately 40 degrees from the midline of the body, creating an A-shape. There are three different types of A-pose: Palms down, palms forward, and elbows bent.
After creating a Skeleton from the , calibration markers need to be removed. First, detach the calibration markers from the subject. Then, in Motive, right-click on the Skeleton in the perspective view to access the context menu and click Skeleton → Remove Calibration Markers. Check the to make sure that the Skeleton no longer expects markers in the corresponding medial positions.
To recalibrate Skeletons, select all of the associated Skeleton markers from the perspective view and click Recalibrate From Markers which can be found in the Skeleton context menu from either the or the . When using this feature, select a Skeleton and the markers that are related to the corresponding asset.
Skeleton marker colors and marker sticks can be viewed in the pane. They provide color schemes for clearer identification of Skeleton segments and individual marker labels from the perspective viewport. To make them visible, enable the Marker Sticks and Marker Colors under the visual aids in the pane. A default color scheme is assigned when creating a Skeleton asset. To modify marker colors and labels, you can use the .
Constraints store information about marker labels, colors, and marker sticks, which can be modified, exported, and re-imported as needed. For more information on doing this, please refer to the page.
The marker colors and sticks are featured only in Motive 1.10 and above; Skeletons created using Motive versions before 1.10 will not include the colors and sticks. For Takes recorded before 1.10, the Skeleton assets will need to be updated by right-clicking an asset and selecting Update Markers. The Update Markers feature will apply the default XML template to the Skeleton assets.
When adding or removing markers in the Edit mode, the Take needs to be auto-labeled again to re-label the Skeleton markers.
You can add or remove from a Rigid Body or a Skeleton using the Builder pane. This is basically adding or removing markers to the existing Rigid Body and/or Skeleton definition. Follow the below steps to add or remove markers:
Access the Modify tab on the .
When you add extra markers to Skeletons, the markers will be labeled as Skeleton_CustomMarker#. You can use the to change the label as needed.
Enable selection of Asset Model Markers from the visual aids option in .
Access the Modify tab on the .
Select the Skeleton segment that you wish to modify and select the associated that you wish to dissociate.
Assets can be exported into a Motive user profile (.MOTIVE) file if they need to be re-imported. The user profile is a text-readable file that can contain various configuration settings in Motive, including the asset definitions.
There are two ways of obtaining Skeleton joint angles. Rough representations of joint angles can be obtained directly from Motive, but the most accurate representations of joint angles can be obtained by pipelining the tracking data into a third-party biomechanics analysis and visualization software (e.g. or ).
Joint angles generated and exported from Motive are intended for basic visualization purposes only and should not be used for any type of biomechanical or clinical analysis. A rough representation of joint angles can be obtained by either exporting or streaming the Skeleton Rigid Body tracking data. When exporting the tracking data into CSV, set the export setting to Local to obtain bone segment position and orientation values in respect to its parental segment, roughly representing the joint angles by comparing two hierarchical coordinate systems. When streaming the data, set to true in the streaming settings to get relative joint angles.
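As a rough illustration of the math involved, the parent-relative rotation described above can be sketched with plain quaternion algebra. This is a hypothetical example, not Motive's API; the function names and the (w, x, y, z) component ordering are assumptions made for illustration only.

```python
import numpy as np

def quat_conj(q):
    # Conjugate of a (w, x, y, z) quaternion: negate the vector part.
    w, x, y, z = q
    return np.array([w, -x, -y, -z])

def quat_mul(a, b):
    # Hamilton product of two (w, x, y, z) quaternions.
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([
        aw*bw - ax*bx - ay*by - az*bz,
        aw*bx + ax*bw + ay*bz - az*by,
        aw*by - ax*bz + ay*bw + az*bx,
        aw*bz + ax*by - ay*bx + az*bw,
    ])

def local_rotation(parent_q, child_q):
    # Child segment's rotation expressed in its parent's frame:
    # q_local = conj(q_parent) * q_child (unit quaternions assumed).
    return quat_mul(quat_conj(parent_q), child_q)

# Parent rotated 90 deg about Z, child rotated 180 deg about Z:
half = np.sqrt(0.5)
parent = np.array([half, 0.0, 0.0, half])  # 90 deg about Z
child = np.array([0.0, 0.0, 0.0, 1.0])     # 180 deg about Z
print(local_rotation(parent, child))       # 90 deg about Z, relative to parent
```

Comparing two hierarchical coordinate systems this way yields only a rough joint-angle representation, which is why dedicated biomechanics software is recommended for analysis.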
Each Skeleton asset has its marker templates stored in an XML file. By exporting, customizing, and importing the constraint XML files, a Skeleton Marker Set can be modified. Specifically, customizing the XML files will allow you to modify Skeleton marker labels, marker colors, and marker sticks within a Skeleton asset. For detailed instructions on modifying Skeleton XML files, read through page.
To export a Skeleton XML file, right-click on a Skeleton asset under the Assets pane and use the feature to export corresponding Skeleton marker XML file.
You can import marker XML file under the Labels section of the when first creating a new Skeleton. To import a constraints XML file on an existing Skeleton, right-click on a Skeleton asset under the Assets pane and click Import Constraints.
Tip: Prime series cameras illuminate blue when in Live mode, green when recording, and turn off in Edit mode. See more at .
In Motive, capture recording is controlled from the control deck while in the Live mode. A new Take name can be assigned in the name box, or you can simply start the recording and let Motive automatically generate new names on the fly. You can also create empty Takes in the Data Management pane for better organization. To start a capture, select Live mode and click the record button (red). In the control deck, the record time and frames are displayed in Hours:Minutes:Seconds:Frames.
In Motive, all of the recorded capture files are managed through the . Each capture will be saved in a Take (TAK) file, which can be played back in the Edit mode later. Related Take files can be grouped within session folders. Simply create a new folder in the desired directory and load the folder onto the . Currently selected session folder is indicated with the flag symbol (), and all newly recorded Takes will be saved in this folder.
If the capture was unsuccessful, simply record the same Take again, and another one will be recorded with an incremented suffix appended to the given Take name (e.g. walk_001, walk_002, walk_003). The suffix format is defined in the .
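The incrementing-suffix behavior can be sketched in a few lines. This is a hypothetical illustration of the naming convention, not Motive's actual implementation; the `next_take_name` helper is invented for this example.

```python
def next_take_name(base, existing, width=3):
    """Return `base` with the next unused numeric suffix, e.g. walk_003."""
    n = 1
    while f"{base}_{n:0{width}d}" in existing:
        n += 1
    return f"{base}_{n:0{width}d}"

recorded = {"walk_001", "walk_002"}
print(next_take_name("walk", recorded))  # walk_003
```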
When a capture is first recorded, both 2D data and real-time reconstructed 3D data is saved onto the Take. For more details on each data type, refer to the page.
Throughout a capture, you may notice different marker types that appear in the . In order to correctly interpret the tracking data, it is important to understand the differences between these markers. There are three displayed marker types: unlabeled markers, Rigid Body markers, and bone (or Skeleton) markers.
Colors of the unlabeled markers can be changed from the .
Read through the page for more information on marker labels.
For instructions on creating a measurement probe, please refer to page. You can purchase our probe or create your own. All you need is 4 markers with a static relationship to a projected tip.
Use the created measurement probe to collect sample points that outline the silhouette of your object. Mark all of the corners and other key features on the object.
After 3D data points have been generated using the probe, attach your game geometry (obj file) to the Rigid Body by turning on the property and importing the geometry under property.
The next step is to translate the 3D model so that the attached model aligns with the silhouette sample that we collected in Step 3. The model can be easily translated and rotated using the . Move, rotate, and scale the asset until it is aligned with the silhouette.
For accurate alignment, it will be easier to decrease the size of the marker visual. This can be changed from the setting under the application settings panel.
After you have translated, rotated, and scaled the pivot point of the Rigid Body to align the attached 3D model with the sampled data points, the transformation values will be shown under the property.
Copy and paste this transformation parameter onto the Rigid Body location and orientation options under the Edit tab in the . This will translate the pivot point of the Rigid Body in Motive, and align it with the pivot point of the 3D model.
Alternatively, if the probe method is not applicable, you can switch one of the cameras into grayscale view, right-click on the camera in the Cameras view, and select Make Reference. This creates a Rigid Body overlay for aligning the Rigid Body pivot using a similar approach as above.
Images in recorded 2D data depend on the , also called the video type, of each camera that was selected at the time of the capture. Cameras that were set to reference modes (MJPEG grayscale images) record reference videos, and cameras that were set to tracking modes (object, precision, segment) record 2D object images which can be used in the reconstruction process. The latter 2D object data contains information on x and y centroid positions of the captured reflections as well as their corresponding sizes (in pixels) and roundness, as shown in the below images.
Using the 2D object data along with the camera calibration information, 3D data is computed. Extraneous reflections that fail to satisfy the 2D object filter parameters (defined under ) are filtered out, and only the remaining reflections are processed. The process of converting 2D centroid locations into 3D coordinates is called Reconstruction, which is covered later on this page.
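To illustrate the idea behind reconstruction, here is a minimal least-squares triangulation sketch: given calibrated camera positions and the rays implied by 2D centroids, the 3D point closest to all rays is recovered. This is a simplified illustration of the principle, not Motive's actual reconstruction engine; the camera positions and directions below are invented.

```python
import numpy as np

def triangulate(centers, directions):
    """Least-squares 3D point closest to a set of camera rays.
    centers: (N, 3) camera positions; directions: (N, 3) unit ray directions."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for c, d in zip(centers, directions):
        P = np.eye(3) - np.outer(d, d)  # projector orthogonal to the ray
        A += P
        b += P @ c
    return np.linalg.solve(A, b)

# Two cameras, both seeing a marker at (0, 0, 5):
centers = np.array([[-1.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
target = np.array([0.0, 0.0, 5.0])
dirs = target - centers
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
print(triangulate(centers, dirs))  # recovers the marker position
```

With real data the rays do not intersect exactly, which is why camera overlap and calibration quality matter for reconstruction accuracy.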
3D data can be reconstructed either in real-time or in post-capture. For real-time capture, Motive processes captured 2D images on a per-frame basis and streams the 3D data into external pipelines with extremely low processing latency. For recorded captures, the saved 2D data can be used to create a fresh set of 3D data through , and any existing 3D data will be overwritten with the newly reconstructed data.
Contains 2D frames, or 2D object information captured by each camera in a system. 2D data can be monitored from the pane.
3D data contains the 3D coordinates of reconstructed markers. 3D markers are reconstructed from 2D data and show up in the perspective view. Each of their trajectories can be monitored in the . In recorded 3D data, marker labels can be assigned to reconstructed markers either through the auto-labeling process using asset definitions or by assigning them manually. From these labeled markers, Motive solves the positions and orientations of Rigid Bodies and Skeletons.
Recorded 3D data is editable: each frame of a trajectory can be deleted or modified. The post-processing Edit Tools can be used to interpolate missing trajectory gaps or apply smoothing, and marker labels can be assigned or reassigned.
Lastly, from recorded 3D data, the tracking data can be exported into various file formats — CSV, C3D, FBX, and more.
can be used to fill the trajectory gaps.
Solved data is the positional and rotational, 6 degrees of freedom (DoF), tracking data of Rigid Bodies and Skeletons. This data is stored when a TAK is first captured, and it can be removed or recalculated from the recorded 3D data. Solved data is fully calculated on all of the recorded frames, and when it exists, the real-time Rigid Body and Skeleton solvers do not run during playback. This reduces the amount of processing needed for playback and improves performance.
In the , right-click on a selected asset(s) and click Record Solved Data. Assets that contain solved data will be indicated under the solved column.
In the , right-click on a Take and click Solve All Assets to produce solved data for all of the associated assets. Takes that contain solved data will be indicated under the solved column.
Recorded 2D data, audio data, and reference videos can be deleted from a Take file. To do this, right-click on one or more recorded Takes and click Delete 2D Data from the context menu. A dialog window will then pop up, asking which types of data to delete. After removing the data, a backup file will be archived into a separate folder.
Deleting 2D data will significantly reduce the size of the Take file. You may want to delete recorded 2D data when a final version of the reconstructed 3D data is already recorded in a Take and the 2D data is no longer needed. However, be aware that deleting 2D data removes the most fundamental data from the Take file: the action cannot be reverted, and without 2D data, the 3D data cannot be reconstructed again.
Recorded 3D data can be deleted from the context menu in the . To delete 3D data, right-click on the selected Takes and click Delete 3D data; all reconstructed 3D information will be removed from the Take. When you delete the 3D data, all edits and labeling will be deleted as well. As long as the 2D data remains, new 3D data can always be reacquired by reconstructing and auto-labeling the Take.
When multiple Takes are selected, deleting 3D data removes the 3D data from all of the selected Takes, across their entire frame ranges.
Assigned marker labels can be deleted from the context menu in the . The Delete Marker Labels feature removes all marker labels from the 3D data of selected Takes. All markers will become unlabeled.
OS: Windows 10, 11 (64-bit)
CPU: Intel i7 or better, running at 3 GHz or greater
RAM: 16GB of memory
GPU: GTX 1050 or better with the latest drivers
OS: Windows 10, 11 (64-bit)
CPU: Intel i7, 3 GHz
RAM: 4GB of memory
Live Rigid Bodies | 0 | 0 | Unlimited | Unlimited | Unlimited |
Live Skeletons | 0 | 0 | 0 | Up to 3 | Unlimited |
Edit Rigid Bodies | Unlimited | Unlimited | Unlimited | Unlimited | Unlimited |
Edit Skeletons | Up to 3 | Unlimited | 0 | Up to 3 | Unlimited |
Rotate view | Right + Drag |
Pan view | Middle (wheel) click + drag |
Zoom in/out | Mouse Wheel |
Select in View | Left mouse click |
Toggle Selection in View | CTRL + left mouse click |
Frame Rate | Number of samples included per every second of exported data. |
Start Frame |
End Frame |
Scale | Apply scaling to the exported tracking data. |
Units | Sets the length units to use for exported data. |
Axis Convention | Sets the axis convention on exported data. This can be set to a custom convention, or to preset conventions for exporting to Motion Builder or Visual3D/Motion Monitor. |
X Axis Y Axis Z Axis | Allows customization of the axis convention in the exported file by determining which positional data to be included in the corresponding data set. |
Use Zero Based Frame Index | The C3D specification defines the first frame as index 1, but some applications import C3D files with the first frame starting at index 0. Setting this option to true adds a start frame parameter with value zero to the data header. |
Export Unlabeled Markers | Includes unlabeled marker data in the exported C3D file. When set to False, the file will contain data for only labeled markers. |
Export Finger Tip Markers | Includes virtual reconstructions at the finger tips. Available only with Skeletons that support finger tracking (e.g. Baseline + 11 Additional Markers + Fingers (54)) |
Use Timecode | Includes timecode. |
Rename Unlabeled As _000X | Unlabeled markers will have incrementing labels with numbers _000#. |
Marker Name Syntax | Choose whether the marker naming syntax uses ":" or "_" as the name separator. The name separator will be used to separate the asset name and the corresponding marker name in the exported data (e.g. AssetName:MarkerLabel or AssetName_MarkerLabel or MarkerLabel). |
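As an illustration of how the name separator affects downstream parsing, the following hypothetical snippet splits exported marker names back into asset and label parts. The `split_marker_name` helper is invented for this example; note that `_` can also appear inside labels, which makes the `:` separator less ambiguous in practice.

```python
def split_marker_name(name, separators=(":", "_")):
    """Split 'AssetName:MarkerLabel' (or '_'-separated) into (asset, label).
    A bare 'MarkerLabel' comes back with an empty asset name."""
    for sep in separators:
        if sep in name:
            asset, label = name.split(sep, 1)
            return asset, label
    return "", name

print(split_marker_name("Skeleton1:LSHO"))  # ('Skeleton1', 'LSHO')
print(split_marker_name("LSHO"))            # ('', 'LSHO')
```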
The Edit Tools in Motive enable users to post-process tracking errors in recorded capture data. Multiple editing methods are available, and you need to understand them clearly in order to properly fix errors in captured trajectories. Tracking errors are sometimes inevitable due to the nature of marker-based motion capture systems, so understanding the functionality of the editing tools is essential. Before getting into details, note that post-editing of motion capture data often takes considerable time and effort: every captured frame must be examined carefully, and corrections must be made for each error discovered. Furthermore, some of the editing tools apply mathematical modifications to marker trajectories, and these tools may introduce discrepancies if misused. For these reasons, we recommend optimizing the capture setup so that tracking errors are prevented in the first place.
Common tracking errors include marker occlusions and labeling errors. Labeling errors include unlabeled markers, mislabeled markers, and label swaps; fortunately, these can be corrected simply by reassigning the proper labels to the markers. Markers may also be blocked from camera views during capture. In this case, the markers will not be reconstructed into 3D space, introducing a gap in the trajectory; these are referred to as marker occlusions. Marker occlusions are critical because the trajectory data is not collected at all, and retaking the capture may be necessary if the missing marker is significant to the application. For these occluded markers, the Edit Tools also provide interpolation pipelines to model the occluded trajectory using other captured data points. Read through this page to understand each of the data-editing methods in detail.
Steps in Editing
General Steps
Skim through the overall frames in a Take to get an idea of which frames and markers need to be cleaned up.
Refer to the Labels pane and inspect gap percentages in each marker.
Select a marker that is often occluded or misplaced.
Look through the frames in the Graph pane, and inspect the gaps in the trajectory.
For each gap in frames, look for an unlabeled marker at the expected location near the solved marker position. Re-assign the proper marker label if the unlabeled marker exists.
Use Trim Tails feature to trim both ends of the trajectory in each gap. It trims off a few frames adjacent to the gap where tracking errors might exist. This prepares occluded trajectories for Gap Filling.
Find the gaps to be filled, and use the Fill Gaps feature to model the estimated trajectories for occluded markers.
In some cases, you may wish to delete the 3D data for certain markers in a Take file, for example, to remove corrupt 3D reconstructions or trim out erroneous movements and improve data quality. In the Edit mode, reconstructed 3D markers can be deleted for a selected range of frames. To delete a 3D marker entirely, select the markers you wish to delete and press the Delete key; they will be erased from the 3D data. To delete 3D markers for a specific frame range only, open the Graph pane, select the frame range to delete the markers from, and press the Delete key. The 3D trajectories of the selected markers will be erased for the highlighted frame range.
Note: Deleted 3D data can be recovered by reconstructing and auto-labeling new 3D data from recorded 2D data.
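Conceptually, deleting a frame range turns those samples into a gap. Below is a minimal sketch using the common convention of marking missing mocap samples as NaN; this is an illustration of the concept, not Motive's internal representation.

```python
import numpy as np

# A marker trajectory as an (n_frames, 3) array of XYZ positions.
traj = np.ones((10, 3))

# "Deleting" a selected frame range turns those samples into a gap;
# NaN is a common convention for missing mocap samples.
start, end = 3, 6            # selected frame range (inclusive)
traj[start:end + 1] = np.nan

print(np.isnan(traj[:, 0]))  # frames 3..6 are now a gap
```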
The trimming feature can be used to crop a specific frame range from a Take. For each round of trimming, a copy of the Take will be automatically archived and backed up into a separate session folder.
Steps for trimming a Take
1) Determine a frame range that you wish to extract.
2) Set the working range (also called the view range) in the Graph View pane. All frames outside of this range will be trimmed out. You can set the working range in the following ways:
Specify the starting frame and ending frame from the navigation bar on the Graph Pane.
3) After zooming into the desired frame range, click Edit > Trim Current Range to trim out the unnecessary frames.
4) A dialog box will pop up asking to confirm the data removal. If you wish to reset the frame numbers upon trimming the take, select the corresponding check box on the pop-up dialog.
The first step in post-processing is to check for labeling errors. Labels can be lost or mislabeled onto irrelevant markers, either momentarily or for an entire capture, especially when the marker placement is not optimized or when there are extraneous reflections. As mentioned on other pages, marker labels are vital when tracking a set of markers, because each label affects how the overall set is represented. Examine the recorded capture and spot labeling errors from the perspective view, or by checking the trajectories of suspicious markers in the Graph pane. Use the Labels pane or the Tracks View mode in the Graph pane to monitor unlabeled markers in the Take.
When a marker is momentarily unlabeled, the color of the tracked marker switches between white (labeled) and orange (unlabeled) under the default color settings. Mislabeled markers may produce large gaps, a crooked model, and trajectory spikes. First, explore the captured frames and find where the label has been misplaced. As long as the target markers are visible, this error can easily be fixed by reassigning the correct labels. Note that this method is preferred over the editing tools because it preserves the actual data and avoids approximation.
Read more about labeling markers from the Labeling page.
The Edit Tools provide functionality to modify and clean up 3D trajectory data after a capture has been taken. Multiple post-processing methods are featured in the Edit Tools for different purposes: Trim Tails, Fill Gaps, Smooth, and Swap Fix. The Trim Tails method removes data points in a few frames before and after a gap. The Fill Gaps method calculates a missing marker trajectory using interpolation methods. The Smooth method filters out unwanted noise in the trajectory signal. Finally, the Swap Fix method switches marker labels for two selected markers. Remember that modifying data using the Edit Tools changes the raw trajectories, so overuse of the Edit Tools is not recommended. Read through each method and familiarize yourself with the editing tools. Note that you can undo and redo all changes made using the Edit Tools.
Frame Range: If you have a certain frame range selected from the timeline, data edits will be applied to the selected range only.
The Trim Tails method trims, or removes, a few data points before and after a gap. Whenever there is a gap in a marker trajectory, slight tracking distortions may be present on each end. For this reason, it is usually beneficial to trim off a small segment (~3 frames) of data. If these distortions are ignored, they may also interfere with other editing tools that rely on existing data points. Before trimming trajectory tails, check all gaps to see whether the tracking data is distorted; after all, it is better to preserve the raw tracking data as long as it is relevant. Set the appropriate trim settings, and trim the trajectory on the selected frames or on all frames. Each gap must satisfy the gap size threshold value to be considered for trimming, and each trajectory segment must satisfy the minimum segment size, otherwise it will be considered part of a gap. Finally, the Trim Size value determines how many leading and trailing trajectory frames are removed around a gap.
Smart Trim
The Smart Trim feature automatically sets the trim size based on trajectory spikes near an existing gap. It is often unnecessary to delete numerous data points before or after a gap, but in some cases it is useful to delete more data points than in others. This feature determines whether each end of a gap is likely to contain errors and deletes an appropriate number of frames accordingly. The Smart Trim feature will never trim more frames than the defined Leading and Trailing values.
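The basic trimming behavior can be sketched as follows. This is a simplified illustration, assuming trajectories are stored as arrays with NaN marking existing gaps; the `trim_tails` helper and its parameters loosely mirror, but are not, Motive's implementation.

```python
import numpy as np

def trim_tails(traj, trim_size=3, min_gap=2):
    """Set `trim_size` frames on each side of every gap to NaN.
    traj: (n, 3) array where gaps are already NaN; gaps shorter than
    `min_gap` frames are left alone (gap size threshold)."""
    traj = traj.copy()
    missing = np.isnan(traj[:, 0])
    n = len(traj)
    i = 0
    while i < n:
        if missing[i]:
            j = i
            while j < n and missing[j]:
                j += 1                               # find end of the gap
            if j - i >= min_gap:
                traj[max(0, i - trim_size):i] = np.nan    # leading tail
                traj[j:min(n, j + trim_size)] = np.nan    # trailing tail
            i = j
        else:
            i += 1
    return traj

traj = np.ones((20, 3))
traj[8:12] = np.nan                      # a 4-frame gap
out = trim_tails(traj, trim_size=2)
print(np.isnan(out[:, 0]).nonzero()[0])  # the gap has grown by 2 on each side
```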
Gap filling is the primary method in the data-editing pipeline; this feature remodels trajectory gaps with interpolated marker positions in order to accommodate markers that were occluded during capture. The function runs mathematical modeling to interpolate the occluded marker positions from either the existing trajectories or other markers in the asset. Note that interpolating a large gap is not recommended, because approximating too many data points may lead to data inaccuracy.
New in Motive 3.0: for Skeletons and Rigid Bodies only, Model Asset Markers can be used to fill individual frames where a marker has been occluded. Model Asset Markers must first be enabled in the Properties pane while the desired asset is selected, and then enabled for selection in the viewport. When you encounter frames where the marker is lost from camera view, select the associated Model Asset Marker in the 3D view, right-click to open the context menu, and select Set Key.
First, set the Max. Gap Size value to define the maximum frame length for an occlusion to be considered a gap; gaps longer than this will not be affected by the filling mechanism. Set a reasonable maximum gap size for the capture after looking through the occluded trajectories. To quickly navigate through the trajectory graphs in the Graph pane for missing data, use the Find Gap features (Find Previous and Find Next) to automatically select a gap frame region so the data can be interpolated. Then, apply the Fill Gaps feature while the gap region is selected. Various interpolation options are available in the settings, including Constant, Linear, Cubic, Pattern-based, and Model-based.
There are five interpolation options offered in the Edit Tools: constant, linear, cubic, pattern-based, and model-based. The first three (constant, linear, and cubic) look at a single marker trajectory and estimate the marker position using the data points before and after the gap; in other words, they model the gap by applying polynomial interpolations of different degrees. The other two (pattern-based and model-based) reference visible markers and asset models to estimate the occluded marker position.
Constant
Applies a zero-degree approximation, assuming that the marker position is stationary and remains the same until the next corresponding label is found.
Linear
Applies a first-degree approximation, assuming that the motion is linear, to fill the missing data. Only use this when you are sure that the marker is moving in a linear motion.
Cubic
Applies third-degree polynomial interpolation, cubic spline, to fill the missing data in the trajectory.
Pattern based
This option refers to the trajectories of selected reference markers and assumes that the target marker moves in a similar pattern. The Fill Target marker is specified from the drop-down menu under the Fill Gaps tool. When multiple markers are selected, a Rigid Body relationship is established among them, and that relationship is used to fill the trajectory gaps of the selected Fill Target marker as if the markers were all attached to the same Rigid Body. The general workflow for pattern-based interpolation is:
Select both reference markers and the target marker to fill.
Examine the trajectory of the target marker in the Graph pane: its size, range, and number of gaps.
Set an appropriate Max. Gap Size limit.
Select the Pattern Based interpolation option.
Specify the Fill Target marker in the drop-down menu.
When interpolating for only a specific section of the capture, select the range of frames from Graph pane.
Click Fill Selected, Fill All, or Fill Everything.
Model based
This interpolation is used to fill marker gaps of an asset (skeleton segments or Rigid Bodies). Model-based interpolation refers to the asset model and its expected marker positions to estimate the trajectory. When used on a skeleton asset, the other skeleton markers and related segments determine a reliable location for the marker during the occluded gap. To use this option, simply select a gapped marker within a model, configure the Max. Gap Size value, and apply the interpolation over the desired frame range.
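As a rough illustration of the single-trajectory options above, the constant and linear fills can be sketched in a few lines of Python (a simplified stand-in, not Motive's implementation; gaps are modeled as None values in a single coordinate channel):

```python
# Illustrative sketch (not Motive's implementation) of the constant and
# linear gap-filling options applied to a single 1D trajectory channel.
# Gaps are represented as None; frame indices are implicit list positions.

def fill_gap_constant(track):
    """Zero-degree fill: hold the last known value across the gap."""
    filled = list(track)
    for i, v in enumerate(filled):
        if v is None and i > 0:
            filled[i] = filled[i - 1]
    return filled

def fill_gap_linear(track):
    """First-degree fill: straight line between the gap's endpoints."""
    filled = list(track)
    i = 0
    while i < len(filled):
        if filled[i] is None:
            start = i - 1                      # last valid frame before gap
            end = i
            while end < len(filled) and filled[end] is None:
                end += 1                       # first valid frame after gap
            if start >= 0 and end < len(filled):
                a, b = filled[start], filled[end]
                span = end - start
                for j in range(start + 1, end):
                    filled[j] = a + (b - a) * (j - start) / span
            i = end
        else:
            i += 1
    return filled

track = [1.0, 2.0, None, None, 5.0]
print(fill_gap_constant(track))  # [1.0, 2.0, 2.0, 2.0, 5.0]
print(fill_gap_linear(track))    # [1.0, 2.0, 3.0, 4.0, 5.0]
```

The cubic option works the same way but fits a third-degree spline through the surrounding samples instead of a straight line.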
The smoothing feature applies a noise filter (4th-order low-pass Butterworth) to trajectory data, making the marker trajectory smoother. This is a bi-directional filter that does not introduce phase shifts. Using this tool, any vibrating or fluttering movements can be filtered out. First, set the cutoff frequency for the filter to define how strongly the data will be smoothed.
When the cutoff frequency is set high, only high-frequency signals are filtered. When the cutoff frequency is low, trajectory signals at a lower frequency range will also be filtered. In other words, a low cutoff frequency setting will smooth most of the transitioning trajectories, whereas high cutoff frequency setting will smooth only the fluttering trajectories.
High-frequency data is present during sharp transitions, and it can also be introduced by signal noise. Commonly used Filter Cutoff Frequency values range from 7 Hz to 12 Hz, but you may want to set the value higher for fast, sharp motions to avoid softening motion transitions that need to stay sharp.
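The benefit of bi-directional filtering can be shown with a toy example (a basic one-pole smoother stands in for Motive's 4th-order Butterworth; running the filter forward and then backward cancels the phase lag of each pass):

```python
# Simplified illustration of zero-phase (bi-directional) low-pass
# filtering. Motive uses a 4th-order Butterworth; here a basic one-pole
# smoother stands in, run forward and then backward so that the phase
# lag of each pass cancels out.

def one_pole_lowpass(samples, alpha):
    """Exponential smoothing; smaller alpha = stronger smoothing."""
    out = [samples[0]]
    for x in samples[1:]:
        out.append(out[-1] + alpha * (x - out[-1]))
    return out

def zero_phase_smooth(samples, alpha):
    forward = one_pole_lowpass(samples, alpha)
    backward = one_pole_lowpass(forward[::-1], alpha)
    return backward[::-1]

noisy = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]   # high-frequency flutter
print(zero_phase_smooth(noisy, alpha=0.5))
```

The fluttering signal comes out with a much smaller amplitude, and because the second pass runs in reverse, the smoothed samples stay time-aligned with the originals.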
In some cases, marker labels may be swapped during capture. Swapped labels can result in erratic orientation changes or crooked skeletons, but they can be corrected by re-labeling the markers. The Swap Fix feature in the Edit Tools can be used to correct obvious swaps that persist through the capture. Select the two markers whose labels are swapped, and select the frame range that you wish to edit.
The Find Previous and Find Next buttons allow you to navigate to the frames where the markers' positions changed. If a frame range is not specified, the change will be applied from the current frame forward. Finally, switch the marker labels by clicking the Apply Swap button. As long as both labels are present in the frame and the only correction needed is to exchange the labels, the Swap Fix tool can be used to make the correction.
CS-200:
CS-400: Used for common mocap applications. Contains knobs for adjusting the balance as well as slots for aligning with a force plate.
Legacy L-frame square: Legacy calibration square designed before the change to the right-handed coordinate system.
Various types of files, including the tracking data, can be exported out from Motive. This page provides information on what file formats can be exported from Motive and instructions on how to export them.
Once captures have been recorded into Take files and the corresponding 3D data have been reconstructed, tracking data can be exported from Motive in various file formats.
Exporting Rigid Body Tracking Data
If the recorded Take includes Rigid Body trackable assets, make sure all of the Rigid Bodies are Solved prior to exporting. The solved data will contain positions and orientations of each Rigid Body.
In the export dialog window, the frame rate, the measurement scale, and the frame range of the exported data can be configured. Additional export settings are available for each export file format. Read through the pages below for details on the export options for each file format:
Exporting a Single Take
Step 1. Open and select a Take to export from the Data pane. The selected Take must contain reconstructed 3D data.
Step 2. Under the File tab on the command bar, click File → Export Tracking Data. This can also be done by right-clicking on a selected Take from the Data pane and clicking Export Tracking Data from the context menu.
Step 3. In the export dialog window, select a file format and configure the corresponding export settings.
To export the entire frame range, set Start Frame and End Frame to Take First Frame and Take Last Frame.
To export a specific frame range, set Start Frame and End Frame to Start of Working Range and End of Working Range.
Step 4. Click Save.
Working Range:
The working range (also called the playback range) is both the view range and the playback range of a corresponding Take in Edit mode. Only within the working frame range will recorded tracking data be played back and shown on the graphs. This range can also be used to output specific frame ranges when exporting tracking data from Motive.
The working range can be set from the following places:
In the navigation bar of the Graph View pane, you can drag the handles on the scrubber to set the working range.
You can also use the navigation controls on the Graph View pane to zoom in or zoom out on the frame ranges to set the working range. See: Graph View pane page.
Start and end frames of a working range can also be set from the Control Deck when in the Edit mode.
Exporting Multiple Takes
Step 1. Under the Data pane, shift + select all the Takes that you wish to export.
Step 2. Right-click on the selected Takes and click Export Tracking Data from the context menu.
Step 3. An export dialogue window will show up for batch exporting tracking data.
Step 4. Select the desired output format and configure the corresponding export settings.
Step 5. Select the frame range to export under the Start Frame and End Frame settings. You can export either the entire frame range or a specified frame range for each Take. When exporting specific ranges, the desired working range must be set in each respective Take.
To export entire frame ranges, set Start Frame and End Frame to Take First Frame and Take Last Frame.
To export specific frame ranges, set Start Frame and End Frame to Start of Working Range and End of Working Range.
Step 6. Click Save.
Motive Batch Processor:
Exporting multiple Take files with specific options can also be done through a Motive Batch Processor script. For example, refer to FBXExporterScript.cs script found in the MotiveBatchProcessor folder.
Motive exports reconstructed 3D tracking data in various file formats and exported files can be imported into other pipelines to further utilize capture data. Available export formats include CSV, C3D, FBX, BVH, and TRC. Depending on which options are enabled, exported data may include reconstructed marker data, 6 Degrees of Freedom (6 DoF) Rigid Body data, or Skeleton data. The following chart shows what data types are available in different export formats:
CSV and C3D exports are supported in both Motive Tracker and Motive Body licenses. FBX, BVH, and TRC exports are only supported in Motive Body.
A calibration definition of a selected take can be exported from the Export Camera Calibration under the File tab. Exported calibration (CAL) files contain camera positions and orientations in 3D space, and they can be imported in different sessions to quickly load the calibration as long as the camera setup is maintained.
Read more about calibration files under the Calibration page.
When an asset definition is exported to a MOTIVE user profile, the profile stores the calibrated marker arrangement of each asset, which can then be imported into different Takes without re-creating the asset in Motive. Note that these files store only the spatial relationship of the markers; therefore, only identical marker arrangements will be recognized and labeled with the imported asset.
To export the assets, go to the File menu and select Export Assets to export all of the assets in the Live-mode or in the current TAK file(s). You can also use File → Export Profile to export other software settings including the assets.
Recorded NI-DAQ analog channel data can be exported into C3D and CSV files along with the mocap tracking data. Follow the tracking data export steps outlined above and any analog data that exists in the TAK will also be exported.
C3D Export: Both the mocap data and the analog data will be exported into the same C3D file. Please note that all of the analog data within the exported C3D file will be logged at the same sampling frequency. If any of the devices were captured at different rates, Motive will automatically resample all of the analog devices to match the sampling rate of the fastest device. More on C3D files: https://www.c3d.org/
CSV Export: When exporting tracking data to CSV, an additional CSV file will be exported for each NI-DAQ device in a Take. Each exported CSV file contains basic properties and settings in its header, including device information and sample counts, and lists the voltage amplitude of each analog channel. The ratio of the mocap frame rate to the device sampling rate is also included, since analog data is usually sampled at a higher rate.
Note
The coordinate system used in Motive (y-up right-handed) may be different from the convention used in the biomechanics analysis software.
Common Conventions
Since Motive uses a different coordinate system than the system used in common biomechanics applications, it is necessary to modify the coordinate axis to a compatible convention in the C3D exporter settings. For biomechanics applications using z-up right-handed convention (e.g. Visual3D), the following changes must be made under the custom axis.
X axis in Motive should be configured to positive X
Y axis in Motive should be configured to negative Z
Z axis in Motive should be configured to positive Y.
This will convert the coordinate axes of the exported data so that the x-axis represents the mediolateral axis (left/right), the y-axis represents the anteroposterior axis (front/back), and the z-axis represents the longitudinal axis (up/down).
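Under one consistent reading of the settings above, the conversion maps a Motive (y-up) point to a z-up point via a +90° rotation about the X axis; a minimal sketch:

```python
# Sketch of the y-up right-handed -> z-up right-handed conversion the
# settings above describe (one consistent reading of them):
#   exported X =  Motive X
#   exported Y = -Motive Z
#   exported Z =  Motive Y   (the "up" axis becomes Z)
# which is a +90 degree rotation about the X axis.

def motive_to_zup(x, y, z):
    return (x, -z, y)

print(motive_to_zup(1.0, 2.0, 3.0))  # (1.0, -3.0, 2.0)
print(motive_to_zup(0.0, 1.0, 0.0))  # Motive "up" maps to (0.0, -0.0, 1.0)
```

Because this is a pure rotation (determinant +1), the exported data stays right-handed.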
When a Take includes an MJPEG reference camera, its recorded video can be exported to an AVI file or to a sequence of JPEG files. The Export Video option is located under the File tab; you can also right-click a TAK file in the Data pane and export from there. At the bottom of the export dialog, the frame rate of the exported AVI file can be set to the full frame rate or down-sampled to 1/2, 1/4, 1/8, or 1/16 of it. You can also adjust the playback speed to export a video that plays back slower or faster. Captured reference videos can be exported to AVI files using either the H.264 or MJPEG compression format; H.264 allows faster export of the recorded videos and is recommended. Read more about recording reference videos on the Data Recording page.
Reference Video Type: Only compressed MJPEG reference videos can be recorded and exported from Motive. Export for raw grayscale videos is not supported.
Media Player: The exported videos may not be playable in Windows Media Player; use a more robust media player (e.g. VLC) to play the exported video files.
When a recorded capture contains audio data, an audio file can be exported through the Export Audio option that appears when right-clicking on a Take from the Data pane.
Skeletal marker labels for Skeleton assets can be exported as XML files (example shown below) from the Data pane. The XML files can be imported again to use the stored marker labels when creating new Skeletons.
For more information on Skeleton XML files, read through the Skeleton Tracking page.
Sample Skeleton Label XML File
In Motive, Rigid Body assets are used for tracking rigid, unmalleable objects. A set of markers is securely attached to the tracked object, and their relative placement is used to identify the object and report 6 Degree of Freedom (6DoF) data. It is therefore important that the distances between the markers stay the same throughout the range of motion. Either passive retro-reflective markers or active LED markers can be used to define and track a Rigid Body. This page details how to create Rigid Bodies in Motive and covers other useful features associated with these assets.
A Rigid Body in Motive is a collection of three or more markers on an object that are interconnected to each other with an assumption that the tracked object is unmalleable. More specifically, it assumes that the spatial relationship among the attached markers remains unchanged and the marker-to-marker distance does not deviate beyond the allowable deflection tolerance defined under the corresponding Rigid Body properties. Otherwise, involved markers may become unlabeled. Cover any reflective surfaces on the Rigid Body with non-reflective materials, and attach the markers on the exterior of the Rigid Body where cameras can easily capture them.
Tip: If you wish to get more accurate 3D orientation data (pitch, roll, and yaw) for a Rigid Body, it is beneficial to spread the markers as far apart as you can within the same Rigid Body. With the markers placed this way, even a slight deviation in orientation is reflected as a measurable change in marker position.
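A small worked example of why spread helps (the numbers are purely illustrative): with the same positional noise, a longer marker-to-marker lever arm yields a smaller orientation error.

```python
# Illustrative numbers only: with 1 mm of positional noise, markers
# spread 200 mm apart constrain orientation far better than markers
# 20 mm apart. A sideways error of `noise` mm on a lever arm of
# `spread` mm tilts the segment by roughly atan(noise / spread).

import math

def orientation_error_deg(position_noise_mm, marker_spread_mm):
    return math.degrees(math.atan2(position_noise_mm, marker_spread_mm))

print(orientation_error_deg(1.0, 20.0))   # ~2.86 degrees
print(orientation_error_deg(1.0, 200.0))  # ~0.29 degrees
```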
In 3D space, a minimum of three coordinates is required to define a plane using vector relationships; likewise, at least three markers are required to define a Rigid Body in Motive. Whenever possible, it is best to use four or more markers to create a Rigid Body. Additional markers provide more 3D coordinates for computing the position and orientation of the Rigid Body, making overall tracking more stable and less vulnerable to marker occlusions. When some of the markers are occluded, Motive can reference the other visible markers to solve for the missing data and compute the position and orientation of the Rigid Body.
However, placing too many markers on one Rigid Body is not recommended. When too many markers are placed in close vicinity, the markers may overlap in the camera views, and Motive may not be able to resolve the individual reflections. This increases the likelihood of label swaps during capture. Securely place just enough markers (usually fewer than 10) to cover the main frame of the Rigid Body.
Tip: The recommended number of markers per Rigid Body is 4 to 12. A Rigid Body cannot be created from more than 20 markers in Motive.
Within a Rigid Body asset, its markers should be placed asymmetrically because this provides a clear distinction of orientations. Avoid placing the markers in symmetrical shapes such as squares, isosceles, or equilateral triangles. Symmetrical arrangements make asset identification difficult, and they may cause the Rigid Body assets to flip during capture.
When tracking multiple objects using passive markers, it is beneficial to make each Rigid Body asset in Motive unique. Specifically, place the retroreflective markers in a distinctive arrangement on each object so that Motive can clearly identify the markers on each Rigid Body throughout capture. In other words, unique, non-congruent arrangements act as distinctive identification flags among multiple assets in Motive. This not only reduces the processing load for the Rigid Body solver, but also improves tracking stability. Non-unique Rigid Bodies can lead to labeling errors, especially when tracking several assets of similar size and shape.
Note for Active Marker Users
If you are using OptiTrack active markers for tracking multiple Rigid Bodies, it is not required to have unique marker placements. Through the active labeling protocol, active markers can be labeled individually and multiple rigid bodies can be distinguished through uniquely assigned marker labels. Please read through Active Marker Tracking page for more information.
What Makes Rigid Bodies Unique?
The key idea behind creating unique Rigid Bodies is to avoid geometric congruency among the multiple Rigid Bodies in Motive.
Unique Marker Arrangement. Each Rigid Body must have a unique, non-congruent, marker placement creating a unique shape when the markers are interconnected.
Unique Marker-to-Marker Distances. When tracking several objects, introducing unique shapes can be difficult. Another solution is to vary the marker-to-marker distances. This creates similar shapes of varying sizes, making them distinguishable from one another.
Unique Marker Counts. Adding extra markers is another way of introducing uniqueness. Extra markers not only make the Rigid Bodies more distinctive, but they also provide more options for varying the arrangements to avoid congruency.
What Happens When Rigid Bodies Are Not Unique?
Having multiple non-unique Rigid Bodies may lead to mislabeling errors. However, Motive can also track non-unique Rigid Bodies fairly well, as long as they are continuously tracked throughout capture: Motive refers to the trajectory history to identify and associate the corresponding Rigid Bodies across frames. To track non-unique Rigid Bodies, make sure the Properties → General Settings → Unique setting in the Rigid Body Properties of each asset is set to False.
Even though it is possible to track non-unique Rigid Bodies, it is strongly recommended to make each asset unique. Tracking of multiple congruent Rigid Bodies can be lost during capture, either through occlusion or by stepping outside of the capture volume. Also, when two non-unique Rigid Bodies are positioned near each other and overlap in the scene, their marker labels may get swapped. If this happens, additional effort will be required to correct the labels in post-processing of the data.
Multiple Rigid Bodies Tracking
Depending on the object, there could be limitations on marker placements and number of variations of unique placements that could be achieved. The following list provides sample methods for varying unique arrangements when tracking multiple Rigid Bodies.
1. Create Distinctive 2D Arrangements. Create distinctive, non-congruent, marker arrangements as the starting point for producing multiple variations, as shown in the examples above.
2. Vary heights. Use marker bases or posts, with different heights to introduce variations in elevation to create additional unique arrangements.
3. Vary Maximum Marker to Marker Distance. Increase or decrease the overall size of the marker arrangements.
4. Add Two (or More) Markers. Lastly, if additional variation is needed, add extra markers to introduce uniqueness. We recommend adding at least two extra markers in case one of them is occluded.
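One way to sanity-check uniqueness is to compare the sorted marker-to-marker distance sets of two arrangements (a hypothetical helper, not a Motive feature; congruent arrangements produce nearly identical distance sets):

```python
# Hypothetical sketch of checking arrangement uniqueness: compare the
# sorted marker-to-marker distances of two arrangements. Nearly identical
# distance sets mean congruent (ambiguous) Rigid Bodies. Coordinates are
# in mm; the tolerance is an illustrative assumption.

import itertools, math

def pairwise_distances(markers):
    return sorted(math.dist(a, b) for a, b in itertools.combinations(markers, 2))

def are_congruent(markers_a, markers_b, tolerance_mm=2.0):
    da, db = pairwise_distances(markers_a), pairwise_distances(markers_b)
    return len(da) == len(db) and all(
        abs(x - y) <= tolerance_mm for x, y in zip(da, db))

body_a = [(0, 0, 0), (100, 0, 0), (0, 80, 0), (0, 0, 60)]
body_b = [(10, 0, 0), (110, 0, 0), (10, 80, 0), (10, 0, 60)]   # translated copy
body_c = [(0, 0, 0), (120, 0, 0), (0, 70, 0), (0, 0, 50)]      # sizes varied

print(are_congruent(body_a, body_b))  # True  -> ambiguous, not unique
print(are_congruent(body_a, body_c))  # False -> distinguishable
```

Varying the overall size (method 3 above) is exactly what makes body_c distinguishable from body_a here.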
A set of markers attached to a rigid object can be grouped and auto-labeled as a Rigid Body. This Rigid Body definition can be utilized in multiple takes to continuously auto-label the same Rigid Body markers. Motive recognizes the unique spatial relationship in the marker arrangement and automatically labels each marker to track the Rigid Body. At least three coordinates are required to define a plane in 3D space, and therefore, a minimum of three markers are essential for creating a Rigid Body.
Step 1.
Select all associated Rigid Body markers in the 3D viewport.
Step 2.
On the Builder pane, confirm that the selected markers match the markers that you wish to define the Rigid Body from.
Step 3.
Click Create to define a Rigid Body asset from the selected markers.
You can also create a Rigid Body by doing the following actions while the markers are selected:
Perspective View (3D viewport): While the markers are selected, right-click on the perspective view to access the context menu. Under the Rigid Body section, click Create From Selected Markers.
Hotkey: While the markers are selected, use the create Rigid Body hotkey (Default: Ctrl +T).
Step 4.
Once the Rigid Body asset is created, the markers will be colored (labeled) and interconnected to each other. The newly created Rigid Body will be listed under the Assets pane.
Defining Assets in Edit mode:
If Rigid Bodies or Skeletons are created in Edit mode, the corresponding Take needs to be auto-labeled; only then will the Rigid Body markers be labeled using the Rigid Body asset, and positions and orientations computed for each frame. If the 3D data has not been re-labeled after edits to the recorded data, the asset may not be tracked.
Rigid Body properties consist of various configurations of Rigid Body assets in Motive, and they determine how Rigid Bodies are tracked and displayed in Motive. For more information on each property, read through the Properties: Rigid Body page.
Default Properties
When a Rigid Body is first created, default Rigid Body properties are applied to the newly created assets. The default creation properties are configured under the Assets section in the Application Settings panel.
Modifying Properties
Properties for existing Rigid Body assets can be changed from the Properties pane.
An existing rigid body can be modified by adding or removing markers using the context menu.
First select a rigid body from the Assets pane or by selecting the pivot point in the Perspective View.
Ctrl + left-click the markers that you wish to add/remove.
Right-click on the Perspective View pane to open the rigid body context menu.
Under Rigid Body, choose Add/Remove selected markers to/from rigid body.
If needed, right-click on the rigid body and select Reset Pivot to relocate the pivot point to the new center.
Multiple Rigid Bodies
When multiple rigid bodies are selected, the context menu applies only to the primary rigid body selection. The primary rigid body is the last rigid body you selected, and its name is shown in the bottom-right corner of the 3D viewport.
Created rigid body definitions can be modified using the editing tools in the Builder pane or by using the steps covered in the following sections.
The pivot point of a Rigid Body defines both its position and its orientation. When a Rigid Body is created, its pivot point is placed at its geometric center by default, and its orientation axes are aligned with the global coordinate axes. To view the pivot point and the orientation in the 3D viewport, set Bone Orientation to true under the display settings of the selected Rigid Body in the Properties pane.
As mentioned previously, the orientation axes of a Rigid Body are, by default, aligned with the global axes when the Rigid Body is first created. Afterwards, the orientation can be adjusted by editing the Rigid Body orientation in the Builder pane or by using the gizmo tools described in the next section.
There are situations where the desired pivot point location is not at the center of a Rigid Body. The location of the pivot point can be adjusted by assigning it to a marker or by translating it along the Rigid Body axes (x, y, z). For the most accurate pivot point location, attach a marker at the desired pivot location, set the pivot point to that marker, and apply translations for precise adjustments. If you adjust the pivot point after the capture, in Edit mode, the Take will need to be auto-labeled again to apply the changes.
Use the gizmo tools from the perspective view options to easily modify the position and orientation of Rigid Body pivot points. You can translate and rotate a Rigid Body pivot, assign the pivot to a specific marker, and/or assign the pivot to the mid-point among selected markers.
Select Tool (Hotkey: Q): Select tool for normal operations.
Translate Tool (Hotkey: W): Translate tool for moving the Rigid Body pivot point.
Rotate Tool (Hotkey: E): Rotate tool for reorienting the Rigid Body coordinate axis.
Scale Tool (Hotkey: R): Scale tool for resizing the Rigid Body pivot point.
Read through the Gizmo tools page for detailed information.
To assign the pivot point to a marker, first select the pivot point in the Perspective View pane, then Ctrl + select the marker you wish to assign it to. Right-click to open the context menu, and under the Rigid Body section, click Set Pivot Point to Selected Marker.
To translate the pivot point, access the Rigid Body editing tools in the Builder pane while the Rigid Body is selected. In the Location section, you can input the amount of translation (in mm) that you wish to apply. Note that the translation will be applied along the x/y/z of the Rigid Body orientation axis. Resetting the translation will position the pivot point at the geometric center of the Rigid Body according to its marker positions.
If you wish to reset the pivot point, simply open the Rigid Body context menu in the Perspective pane and click Reset Pivot. The location of the pivot point will be reset back to the center of the Rigid Body again.
This feature is useful when tracking a spherical object (e.g. a ball). The Spherical Pivot Placement feature in the Builder pane assumes that all the Rigid Body markers are placed on the surface of a spherical object, and the pivot point is calculated and re-positioned accordingly. To use it, select a Rigid Body, access the Modify tab in the Builder pane, and click Apply under Spherical Pivot Placement.
Rigid Body tracking data can be either exported to a file or streamed to client applications in real time:
Captured 6 DoF Rigid Body data can be exported into CSV, FBX, or BVH files. See: Data Export
You can also use one of the streaming plugins or use NatNet client applications to receive tracking data in real-time. See: NatNet SDK
Assets can be exported into Motive user profile (.MOTIVE) file if it needs to be re-imported. The user profile is a text-readable file that can contain various configuration settings in Motive; including the asset definitions.
When asset definitions are exported to a MOTIVE user profile, the profile stores the calibrated marker arrangement of each asset, which can then be imported into different Takes without re-creating the asset in Motive. Note that these files store only the spatial relationship of the markers; therefore, only identical marker arrangements will be recognized and labeled with the imported asset.
To export the assets, go to File → Export Assets to export all of the assets in Live mode or in the current TAK file. You can also use File → Export Profile to export other software settings, including the assets.
This feature is supported in Live Mode only.
The Rigid Body refinement tool improves the accuracy of Rigid Body calculation in Motive. When a Rigid Body asset is initially created, Motive references only a single frame for defining the Rigid Body definition. The Rigid Body refinement tool allows Motive to collect additional samples in the live mode for achieving more accurate tracking results. More specifically, this feature improves the calculation of expected marker locations of the Rigid Body as well as the position and orientation of the Rigid Body itself.
Steps
Select View from the toolbar at the top, open the Builder pane.
Select the Rigid Bodies from the Type dropdown menu.
In Live mode, select an existing Rigid Body asset that you wish to refine from the Assets pane.
Hold the selected physical Rigid Body at the center of the capture volume so that as many cameras as possible can clearly capture its markers.
Click Start Refine in the Builder pane.
Slowly rotate the Rigid Body to collect samples at different orientations until the progress bar is full.
Once all necessary samples are collected, the refinement results will be displayed.
As in many other measurement systems, calibration is essential for optical motion capture systems. During camera calibration, the system computes the position and orientation of each camera and the amount of distortion in its captured images; these are used to construct the 3D capture volume in Motive. This is done by observing 2D images from multiple synchronized cameras and associating the positions of known calibration markers from each camera through triangulation.
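The triangulation principle can be shown in 2D (a greatly simplified, hypothetical two-camera setup; real calibration also solves for lens distortion and full 3D camera poses): each calibrated camera contributes a ray toward the marker, and the marker position is where the rays intersect.

```python
# Greatly simplified 2D illustration of the triangulation principle.
# The camera positions and ray directions are hypothetical; a real
# system triangulates in 3D from many cameras via least squares.

def intersect_rays(p1, d1, p2, d2):
    """Intersect rays p1 + t*d1 and p2 + s*d2 in 2D."""
    # Solve t*d1 - s*d2 = p2 - p1 as a 2x2 linear system (Cramer's rule).
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t = (rx * (-d2[1]) - (-d2[0]) * ry) / det
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

cam_a = (0.0, 0.0)   # camera positions known from calibration
cam_b = (4.0, 0.0)
ray_a = (1.0, 1.0)   # directions toward the observed marker
ray_b = (-1.0, 1.0)
print(intersect_rays(cam_a, ray_a, cam_b, ray_b))  # (2.0, 2.0)
```

With more than two cameras, the rays never intersect exactly because of noise, which is why the residual of this intersection is a useful measure of calibration quality.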
Please note that if there is any change to the camera setup over the course of a capture, the system will need to be recalibrated to accommodate the changes. Moreover, even if the setup is not altered, calibration accuracy may naturally deteriorate over time due to ambient factors such as fluctuations in temperature and other environmental conditions. Thus, for accurate results, it is recommended to recalibrate the system periodically.
Duo/Trio Tracking Bars: The Duo/Trio tracking bars are self-contained and pre-calibrated prior to shipment; therefore, user calibration is not required.
Prepare and optimize the capture volume for setting up a motion capture system.
Apply masks to ignore existing reflections in the camera view. Here, also make sure the calibration tools are hidden from the camera views.
Collect calibration samples through the wanding process.
Review the wanding result and apply calibration.
Set the ground plane to complete the system calibration.
By default, Motive will start up in the calibration layout, which contains the panes necessary for the calibration process. This layout can also be accessed by clicking the calibration layout button in the top-right corner, or by using the Ctrl+1 hotkey.
System settings used for calibration should be kept unchanged. If camera settings are altered after calibration, the system will potentially need to be recalibrated. To avoid such inconvenience, it is important to optimize both the hardware and software setup before calibrating. First, cameras need to be appropriately placed and configured to fully cover the capture volume. Second, each camera must be mounted securely so that it remains stationary during capture. Lastly, the Motive camera settings used for calibration should ideally remain unchanged throughout the capture; re-calibration will be required if there are significant modifications to settings that influence data acquisition, such as camera settings, gain settings, and Filter Switcher settings.
All extraneous reflections and unnecessary markers should ideally be removed from the capture volume before calibration. In fact, the system will refuse to calibrate if too many reflections other than the calibration wand are present in the camera views. In certain situations, however, unwanted reflections or ambient interference cannot be removed from the setup. In this case, these irrelevant reflections can be ignored using the Masking Tool. This tool applies red masks over the extraneous reflections seen in the 2D camera view, and all of the pixels in the masked regions are entirely filtered out. Use the masking tool to remove any remaining extraneous reflections before proceeding to wanding.
Be careful when using the masking features, because masked pixels are completely filtered from the 2D data. In other words, data in masked regions will not be collected for computing the 3D data, and excessive masking may result in data loss or frequent marker occlusions. Therefore, all removable reflective objects should be taken out or covered before using the masking tool. After all reflections are removed or masked from the view, proceed to the wanding process.
The wanding process is the core pipeline that samples calibration data into Motive. A calibration wand is waved in front of the cameras repeatedly, allowing all cameras to see the markers. Through this process, each camera captures sample frames in order to compute their respective position and orientation in the 3D space. There are a number of calibration wands suited for different capture applications.
Active Wanding:
Applying masks to camera views applies only to calibration wands with passive markers. Active calibration wands can calibrate the capture volume while the LEDs of all the cameras are turned off. If the capture volume contains a large amount of reflective material that cannot be moved, this method is highly recommended.
Under the OptiWands section, specify the wand that you will be using to calibrate the volume. It is very important to input the matching wand size here; if an incorrect dimension is given to Motive, the calibrated 3D volume will be scaled incorrectly. For example, if you are using the CW-500 wand with markers in configuration A, use the 500mm setting. If you are using the CW-250 wand, or the CW-500 wand in configuration B, use the 250mm setting.
Set the Calibration Type. If you are calibrating a new capture volume, choose Full Calibration.
Double check the calibration setting. Once confirmed, press Start Wanding to initiate the wanding process.
Start wanding. Bring your calibration wand into the capture volume and start waving the wand gently across the entire capture volume. Draw figure eights with the wand to collect samples at varying orientations, and cover as much space as possible for sufficient sampling. If you wish to start calibrating inside the volume, cover one of the markers and expose it where you wish to start wanding. When at least two cameras detect all three markers while no other reflections are present in the volume, the wand will be recognized, and Motive will start collecting samples. Wanding trails will be shown in colors in the 2D view. A table displaying the status of the wanding process will appear in the Calibration pane to monitor progress. For best results, wand evenly and comprehensively throughout the volume, covering both low and high elevations.
After collecting a sufficient number of samples, press the Calculate button under the Calibration section.
After wanding throughout all areas of the volume, consult each camera's 2D view in the Camera Preview pane to evaluate individual camera coverage. Each camera view should be thoroughly covered with wand samples. If there are any large gaps, focus additional wanding on those areas to increase coverage. When a sufficient number of calibration samples has been collected by each camera, press Calculate in the Calibration pane, and Motive will start calculating the calibration for the capture volume. Generally, 2,000 - 5,000 samples are enough.
Wanding beyond the recommended amount will not necessarily improve the accuracy of your calibration. There is a diminishing return with wanding samples, and excessive sample counts can actually produce poorer calibration results.
Wanding Tips
Avoid waving the wand too fast. This may introduce bad samples.
Avoid wearing reflective clothing or accessories while wanding. This can introduce extraneous samples which can negatively affect the calibration result.
Try not to collect more than 10,000 samples. Extra samples could negatively affect the calibration.
Try to collect wanding samples covering different areas of each camera view. The status indicator on Prime cameras can be used to monitor the sample coverage on individual cameras.
Although it is beneficial to collect samples all over the volume, it is sometimes useful to collect more samples near the target regions where more tracking is needed. By doing so, calibration results will be more accurate in those specific regions.
Marker Labeling Mode
When performing calibration wanding, please make sure the Marker Labeling Mode is set to the default Passive Markers Only setting. This setting can be found under Application Settings: Application Settings → Live-Reconstruction tab → Marker Labeling Mode. There are known problems with wanding in one of the active marker labeling modes. This applies for both passive marker calibration wands and IR LED wands.
For Prime series cameras, the LED indicator ring displays the status of the wanding process. As soon as the wanding is initiated, the LED ring will turn dark, and then green lights will fill up around the ring as the camera collects the sample data from the calibration wand.
Eventually, the ring will be filled with green light when a sufficient number of samples has been collected. A single LED will glow blue if the calibration wand is detected by the camera, and the clock position of the blue light will indicate the respective wand location in the Camera Preview pane.
For more information, please visit our Camera Status Indicators documentation page.
After sufficient marker samples have been collected, press Calculate to calibrate using the collected samples. The time needed for the calibration calculation varies depending on the number of cameras in the setup as well as the number of collected samples.
Immediately after clicking calculate, the samples window will turn into the solver window. It will display the solver stage at the top, followed by the overall result rating and the overall quality selection. The overall result rating is the lowest rating of any one camera in the volume. The overall quality selection shows the current solver quality.
Calibration details can be reviewed for recorded Takes. Select a Take in the Data pane, and related calibration results will be displayed under the Properties pane. This information is available only for Takes recorded in Motive 1.10 and above.
After the calculation, a Calibration Result Report will pop up, displaying detailed information about the calibration. The calibration result rating is directly related to the mean error, and the result tiers are (in order from worst to best): Poor, Fair, Good, Great, Excellent, and Exceptional. If the results are acceptable, press Apply to use the result; if not, press Cancel and repeat the wanding process. It is recommended to save your calibration file for later use.
After the calculation has completed, you will see cameras displayed in the 3D view pane of Motive. However, the constructed capture volume in Motive will not be aligned with the coordinate plane yet. This is because the ground plane is not set. If calibration results are acceptable, proceed to setting the ground plane.
The final step of the calibration process is setting the ground plane and the origin. This is accomplished by placing the calibration square in your volume and telling Motive where the calibration square is. Place the calibration square inside the volume where you want the origin to be located and the ground plane to be leveled to. The position and orientation of the calibration square will be referenced for setting the coordinate system in Motive. Align the calibration square so that it references the desired axis orientation.
The longer leg of the calibration square indicates the positive z axis, and the shorter leg indicates the direction of the positive x axis. Accordingly, the positive y axis will automatically be directed upward in a right-handed coordinate system. The next step is to use the level indicator on the calibration square to ensure its orientation is horizontal to the ground. If any adjustment is needed, rotate the knob beneath the markers to adjust the balance of the calibration square.
If you wish to adjust the position and orientation of the global origin after a capture has been taken, you can apply capture volume translation and rotation from the Calibration pane. After the modification has been applied, a new set of 3D data must be reconstructed from the recorded 2D data.
After confirming that the calibration square is properly placed, open the Ground Plane tab of the Calibration pane. Select the three calibration square markers in the 3D Perspective View. When the markers are selected, press Set Ground Plane to reorient the global coordinate axes with respect to the calibration square. After setting the ground plane, Motive will ask to save the calibration data as a calibration (CAL) file.
Duo/Trio Tracking Bars: The global origin of the tracking bars can be adjusted by using a calibration square and the Coordinate System Tools in Motive.
The Vertical Offset setting in the Calibration pane is used to compensate for the offset distance between the center of the markers on the calibration square and the actual ground. Defining this value takes the offset distance into account and sets the global origin slightly below the markers. Accordingly, this value should correspond to the actual distance between the center of the marker and the lowest tip at the vertex of the calibration square. When a calibration square is detected in Motive, it will recognize the type of square used and automatically set the offset value. This setting can also be used when you want to place the ground plane at a specific elevation: a positive offset value places the plane below the markers, and a negative value places the plane above the markers.
The Ground Plane Refinement feature is used to improve the leveling of the coordinate plane. To refine the ground plane, place several markers with a known radius on the ground, and set the vertical offset value to the corresponding radius. You can then select these markers in Motive and press Refine Ground Plane, and Motive will refine the leveling of the plane using the position data from each marker. This feature is especially useful when establishing a ground plane for a large volume, because the surface may not be perfectly uniform throughout the plane.
Calibration files can be used to preserve calibration results. The information from the calibration is exported or imported via the CAL file format. Calibration files reduce the effort of calibrating the system every time you open Motive. They can also be stored within the project so that they are loaded whenever the project is accessed. By default, Motive loads the last calibration file that was created; this can be changed via the Application Settings.
Note that whenever there is a change to the system setup, these calibration files will no longer be relevant and the system will need to be recalibrated.
The continuous calibration feature continuously monitors and refines the camera calibration to its best quality. When enabled, minor distortions to the camera system setup can be adjusted automatically without wanding the volume again. In other words, you can calibrate a camera system only once and you will no longer have to worry about external distortions such as vibrations, thermal expansion on camera mounts, or small displacements on the cameras. For detailed information, read through the Continuous Calibration page.
The Continuous Calibration can be enabled under the Reconstruction tab in the Application Settings.
Disabled
Continuous Calibration is disabled.
Continuous
In this mode, the Continuous Calibration is enabled and Motive is continuously optimizing the camera calibration. This mode will accommodate only the minor changes, such as vibrations, thermal expansions, or minor drifts in positions and orientations of the cameras.
Continuous + Bumped Camera
This mode also allows Motive to continuously monitor the system calibration. Unlike the standard Continuous mode, it can adjust the system calibration to even drastic changes in the positions and orientations of cameras. If a camera has been displaced significantly, use the bumped camera mode and Motive will accommodate the change and reposition the bumped camera. For simply maintaining calibration quality, the Continuous mode is sufficient.
When capturing throughout a whole day, temperature fluctuations may degrade calibration quality, and you may want to recalibrate the capture volume at different times of the day. However, repeating the entire calibration process can be tedious and time-consuming, especially with a high-camera-count setup. In this case, instead of repeating the entire calibration process, you can simply record Takes containing the wand waves and the calibration square, and use those Takes to re-calibrate the volume in post-processing. This offline calibration saves calculation time on the capture day because the recorded wanding Take can be processed later. Users can also inspect the collected capture data and re-calibrate the recorded Take only when signs of degraded calibration quality are seen in the captures.
Offline Calibration Steps
1) Capture wanding/ground plane takes. At different times of the day, record wanding Takes that closely resemble the calibration wanding process. Also record corresponding ground plane Takes with the calibration square set in the volume for defining the ground plane.
2) Load the recorded Wanding Take. If you wish to re-calibrate the cameras for captured Takes during playback, load the wanding take that was recorded around the same time.
3) Motive: Calibration pane. In the Edit mode, press Start Wanding. The wanding samples from recorded 2D data will be loaded.
4) Motive: Calibration pane. Press Calculate, and wait until the calculation process is complete.
5) Motive: Calibration pane. Apply Result and export the calibration file. File tab → Export Camera Calibration.
6) Load the recorded Ground Plane Take.
7) Open the saved calibration file. With the Ground Plane Take loaded in Motive, open the exported calibration file, and the saved camera calibration will be applied to the ground plane take.
8) Motive: Perspective View. From 2D data of the Ground Plane Take, select the calibration square markers.
9) Motive: Calibration pane: Ground Plane. Set the Ground plane.
10) Motive: Perspective View. Switch back to the Live mode. The recorded Take is now re-calibrated.
Whenever a system is calibrated, a Calibration Wanding file gets saved, which can be used to reproduce the calibration file through the offline calibration process.
The partial calibration feature allows you to update the calibration for some selection of cameras in a system. The way this feature works is by updating the position of the selected cameras relative to the already calibrated cameras. This means that you only need to wand in front of the selected cameras as long as there is at least one unselected camera that can also see the wand samples.
This feature is especially helpful for high camera count systems where you only need to adjust a few cameras instead of re-calibrating the whole system. One common way to get into this situation is by bumping into a single camera. Partial calibrations allow you to quickly re-calibrate the single bumped camera that is now out of place. This feature is also useful for those who need to do a calibration without changing the location of the ground plane. The reason the ground plane does not need to be reset is because as long as there is at least one unselected camera Motive can use that camera to retain the position of the ground plane relative to the cameras.
Partial Calibration Steps
From the Devices pane, select the camera that has been moved or added.
Open the Calibration Pane.
Set Calibration Type: In most cases you will want to set this to Full, but if the camera only moved slightly Refine works as well.
Specify the wand type.
From the Calibration Pane, click Start Wanding. A pop-up dialogue will appear indicating that only selected cameras are being calibrated.
Choose Calibrate Selected Cameras from the dialogue window.
Wave the calibration wand mainly within the view of the selected cameras.
Click Calculate. At this point, only the selected cameras will have their calibration updated.
Notes:
This feature relies on the fact that the unselected cameras are in a good calibration state. If the unselected cameras are out of calibration, then using this feature will return a bad calibration.
Partial calibration does not update the calibration of unselected cameras. However, the calibration report that Motive provides does include all cameras that received samples, selected or unselected.
The partial calibration process can also be used for adding new cameras to an existing calibration. Use the Full calibration type in this case.
The OptiTrack motion capture system is designed to track retro-reflective markers. However, active LED markers can also be tracked with appropriate customization. If you wish to use Active LED markers for capture, the system will ideally need to be calibrated using an active LED wand. Please contact us for more details regarding Active LED tracking.
A Motive Body license can export tracking data into FBX files for use in other 3D pipelines. There are two types of FBX files: Binary FBX and ASCII FBX.
Notes for MotionBuilder Users
When exporting tracking data to MotionBuilder in the FBX file format, make sure the exported frame rate is supported in MotionBuilder (Mobu). Mobu supports only a select set of playback frame rates, and the rate of the exported FBX file must match one of them for the data to play back properly.
If there is a non-standard frame rate selected that is not supported, the closest supported frame rate is applied.
For more information, please visit Autodesk Motionbuilder's Documentation Support site.
Autodesk has discontinued support for FBX ASCII import in MotionBuilder 2018 and above. For alternatives when working in MotionBuilder, please see the Autodesk MotionBuilder: OptiTrack Optical Plugin page.
Exported FBX files in ASCII format can contain reconstructed marker coordinate data as well as 6 degrees of freedom (6DoF) data for each involved asset, depending on the export settings. ASCII files can also be opened and edited using text editor applications.
FBX ASCII Export Options
Binary FBX files are more compact than ASCII FBX files. Reconstructed 3D marker data is not included within this file type, but selected Skeletons are exported by saving corresponding joint angles and segment lengths. For Rigid Bodies, positions and orientations at the defined Rigid Body origin are exported.
FBX Binary Export Options
Captured tracking data can be exported into a Track Row Column (TRC) file, a format used in various mocap applications. Exported TRC files can also be opened in spreadsheet software (e.g. Excel). These files contain raw output data from the capture, including positional data of each labeled and unlabeled marker from a selected Take. Expected marker locations and segment orientation data are not included in the exported files. The header contains basic information such as the file name, frame rate, time, number of frames, and the corresponding marker labels; the corresponding XYZ data is listed in the remaining rows of the file.
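As a rough illustration of the layout described above, the sketch below parses a minimal TRC-style file with Python's standard library. The sample content and marker names (WandTip, WandBase) are invented for the example, and the exact header fields can vary between Motive versions, so verify against your own exports.

```python
# Minimal TRC reader sketch. Field layout follows the common TRC
# convention (two header key/value rows, then Frame#/Time plus an
# X/Y/Z triple per marker); the sample data below is made up.
import csv
import io

SAMPLE_TRC = """PathFileType\t4\t(X/Y/Z)\tsample.trc
DataRate\tCameraRate\tNumFrames\tNumMarkers\tUnits\tOrigDataRate\tOrigDataStartFrame\tOrigNumFrames
120.00\t120.00\t2\t2\tmm\t120.00\t1\t2
Frame#\tTime\tWandTip\t\t\tWandBase\t\t
\t\tX1\tY1\tZ1\tX2\tY2\tZ2
1\t0.000\t10.1\t250.3\t-5.2\t12.0\t180.7\t-4.9
2\t0.008\t10.3\t250.1\t-5.0\t12.1\t180.5\t-4.8
"""

def read_trc(text):
    rows = list(csv.reader(io.StringIO(text), delimiter="\t"))
    meta = dict(zip(rows[1], rows[2]))        # header key/value rows
    labels = [c for c in rows[3][2:] if c]    # marker names
    frames = []
    for row in rows[5:]:                      # numeric data rows
        if not row or not row[0]:
            continue
        vals = [float(v) for v in row[2:] if v != ""]
        coords = {labels[i]: tuple(vals[3 * i:3 * i + 3])
                  for i in range(len(labels))}
        frames.append((int(row[0]), float(row[1]), coords))
    return meta, labels, frames

meta, labels, frames = read_trc(SAMPLE_TRC)
print(meta["Units"], labels, frames[0][2]["WandTip"])
# prints: mm ['WandTip', 'WandBase'] (10.1, 250.3, -5.2)
```

Note that spreadsheet round-trips can reorder or pad these columns, so a production script should validate NumMarkers against the parsed label count.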
Captured tracking data can be exported in Comma Separated Values (CSV) format. This file format uses comma delimiters to separate multiple values in each row, and it can be imported by spreadsheet software or a programming script. Depending on which data export options are enabled, exported CSV files can contain marker data, Rigid Body data, and/or Skeleton data. CSV export options are listed in the following charts:
CSV Options | Description |
---|---|
The quality stats display the reliability of the associated marker data. Errors per marker lists the average displacement between detected markers and the expected marker locations within corresponding assets. Marker Quality values rate how well camera rays converged when the respective marker was reconstructed; the value ranges from 0 (unstable marker) to 1 (accurate marker).
When the header is disabled, this information will be excluded from the CSV files. Instead, the file will have frame IDs in the first column, time data in the second column, and the corresponding mocap data in the remaining columns.
CSV Headers
TIP: Occlusion in the marker data
When a marker is occluded, the CSV file will contain blank cells, which can interfere with scripts that process the CSV data. It is recommended to optimize the system setup to reduce occlusions. To omit unnecessary frame ranges with frequent marker occlusions, export the frame range with the most complete tracking results. Another solution is to use Fill Gaps to interpolate missing trajectories in post-processing.
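As one illustration of handling blank cells in a script, the sketch below linearly interpolates short runs of missing values in a single marker column. The column names and sample data are hypothetical, and this is not Motive's Fill Gaps algorithm, just a minimal stand-in for the same idea.

```python
# Sketch: linearly interpolate short occlusion gaps in one CSV marker
# column. Column layout is hypothetical; real Motive CSV exports have
# multi-row headers when the header option is enabled.
import csv
import io

SAMPLE_CSV = """frame,time,marker_x
0,0.000,1.0
1,0.008,
2,0.017,
3,0.025,4.0
"""

def fill_gaps(values, max_gap=3):
    """Fill interior runs of None up to max_gap frames by linear interpolation."""
    out = list(values)
    i = 0
    while i < len(out):
        if out[i] is None:
            j = i
            while j < len(out) and out[j] is None:
                j += 1
            # interpolate only gaps bounded on both sides, no longer than max_gap
            if 0 < i and j < len(out) and (j - i) <= max_gap:
                step = (out[j] - out[i - 1]) / (j - i + 1)
                for k in range(i, j):
                    out[k] = out[i - 1] + step * (k - i + 1)
            i = j
        else:
            i += 1
    return out

rows = list(csv.DictReader(io.StringIO(SAMPLE_CSV)))
xs = [float(r["marker_x"]) if r["marker_x"] else None for r in rows]
print(fill_gaps(xs))  # [1.0, 2.0, 3.0, 4.0]
```

Gaps at the start or end of the trajectory, or longer than max_gap, are intentionally left blank, mirroring the caution above about interpolating across long occlusions.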
For Takes containing force plates (AMTI or Bertec) or data acquisition (NI-DAQ) devices, additional CSV files will be exported for each connected device. For example, if you have two force plates and a NI-DAQ device in the setup, a total of four CSV files will be saved when you export the tracking data from Motive. Each of the exported CSV files will contain basic properties and settings in its header, including device information and sample counts. The ratio of the mocap frame rate to the device sampling rate is also included, since force plate and analog data are sampled at higher rates.
Please note that device data is usually sampled at a higher rate than the camera system. In this case, camera samples are aligned to the center of the device samples collected over the same period. For example, if the device data has 9 sub-frames for each camera frame, each tracking sample will align with the 5th device sample in its group.
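The centering rule above can be expressed as a small calculation. This is only a sketch of the stated example (9 device sub-frames per camera frame), not Motive's internal implementation:

```python
# Sketch: 1-based index of the device sub-sample aligned with each
# camera frame, assuming the camera sample sits at the center of the
# group of device sub-samples (e.g. sub-sample 5 of 9).
def aligned_subsample(ratio):
    """Center sub-sample index for an odd device-to-camera sample ratio."""
    return ratio // 2 + 1

print(aligned_subsample(9))  # 5
```

For even ratios there is no exact center sub-sample, so how the alignment is reported in that case should be checked against the exported files.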
Force Plate Data: Each of the force plate CSV files will contain basic properties such as platform dimensions and mechanical-to-electrical center offset values. The mocap frame number, force plate sample number, forces (Fx, Fy, Fz), moments (Mx, My, Mz), and the location of the center of pressure (Cx, Cy, Cz) will be listed below the header.
Analog Data: Each of the analog data CSV files contains analog voltages from each configured channel.
Motive can export tracking data in the BioVision Hierarchy (BVH) file format. Exported BVH files do not include individual marker data. Instead, a selected skeleton is exported using hierarchical segment relationships. In a BVH file, the 3D location of a primary skeleton segment (Hips) is exported, and data for subsequent segments is recorded using joint angles and segment parameters. Only one skeleton is exported per BVH file, and it contains the fundamental skeleton definition required for characterizing the skeleton in other pipelines.
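For reference, the hierarchical structure described above takes roughly the following shape in a BVH file. The segment names, offsets, and motion values here are illustrative, not an actual Motive export:

```
HIERARCHY
ROOT Hips
{
  OFFSET 0.00 0.00 0.00
  CHANNELS 6 Xposition Yposition Zposition Zrotation Xrotation Yrotation
  JOINT Spine
  {
    OFFSET 0.00 10.50 0.00
    CHANNELS 3 Zrotation Xrotation Yrotation
    End Site
    {
      OFFSET 0.00 12.00 0.00
    }
  }
}
MOTION
Frames: 2
Frame Time: 0.0083333
0.0 95.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.1 95.1 0.0 0.5 0.0 0.0 0.2 0.0 0.0
```

Note how only the root (Hips) carries position channels; every other joint is described by rotations plus the fixed OFFSET, which is why individual marker positions cannot be recovered from a BVH export.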
Notes on relative joint angles generated in Motive: Joint angles generated and exported from Motive are intended for basic visualization purposes only and should not be used for any type of biomechanical or clinical analysis.
General Export Options
Option | Description |
---|---|
BVH Specific Export Options
Option | Description |
---|---|
This page provides a basic description of marker labels and instructions on the labeling workflow in Motive.
Marker Label
Marker labels are software name tags assigned to the trajectories of reconstructed 3D markers so that the trajectories can be referenced for tracking individual markers, Rigid Bodies, or Skeletons. Motive identifies marker trajectories using the assigned labels. Labeled trajectories can be exported individually, or combined together to compute the positions and orientations of tracked objects. In most applications, all of the target 3D markers will need to be labeled in Motive. There are two methods for labeling markers in Motive: auto-labeling and manual labeling. Both are covered on this page.
Monitoring Labels
Labeled or unlabeled trajectories can be identified and resolved from the following places in Motive:
3D Perspective Viewport: From the 3D viewport, check the Marker Labels in the visual aids option to view marker labels for selected markers.
Labels pane: The Labels pane lists all of the marker labels and the corresponding percentage gap for each label. The color of the label also indicates whether the label is present or missing at the current frame.
Graph View pane: For frames where the selected label is not assigned to any markers, the timeline scrubber gets highlighted in red. Also, the tracks view of this pane provides a list of labels and their continuity in a captured Take.
There are two approaches to labeling markers in Motive:
Auto-label pipeline: Automatically label sets of Rigid Body markers and Skeleton markers using calibrated asset definitions.
Manual Label: Manually label individual markers using the Labels pane.
For tracking Rigid Bodies and Skeletons, Motive can use the asset definitions to automatically label associated markers both in real-time and in post-processing. The auto-labeler uses the asset definitions that are enabled (checked in the Assets pane) to search for sets of markers that match each definition and assigns the pre-defined labels throughout the capture.
There are times, however, when it is necessary to manually label a section or all of a trajectory, either because the markers of a Rigid Body or Skeleton were misidentified (or unidentified) during capture, or because individual markers need to be labeled without using any tracking assets. In these cases, the Labels pane in Motive is used to manually label individual trajectories. The manual labeling workflow is supported only in post-processing, when a Take file (TAK) has been loaded with 3D data as its playback type. For captures containing only 2D data, the Take must first be reconstructed in order to assign or edit marker labels in its 3D data. This manual labeling process, along with 3D data editing, is typically referred to as post-processing of mocap data.
Rigid body and Skeleton asset definitions contain information of marker placements on corresponding assets. This is recorded when the assets are first created, and the auto-labeler in Motive uses them to label a set of reconstructed 3D trajectories that resemble marker arrangements of active assets. Once all of the markers on active assets are successfully labeled, corresponding Rigid Bodies and Skeletons get tracked in the 3D viewport.
The auto-labeler runs in real-time during Live mode, and the marker labels get saved into the recorded TAKs. Running the auto-labeler again in post-processing will attempt to label the Rigid Body and Skeleton markers again from the 3D data.
From Data pane
Select Takes from the Data pane
Right-click to bring up the context menu
Click Reconstruct and Auto-label to process the selected Takes. This pipeline will create a new set of 3D data and auto-label the markers in it.
This will label all the markers that match the corresponding asset definitions.
The settings for the auto-labeling engine are defined in the Auto-labeler section of the Reconstruction pane. The auto-labeler parameters can be modified during post-processing pipelines, and they can be optimized for stable labeling of markers throughout the Take.
Note: Be careful when reconstructing a Take again either by Reconstruct or Reconstruct and Auto-label, because it will overwrite the 3D data and any post-processing edits on trajectories and marker labels will be discarded. Also, for Takes involving Skeleton assets, the recorded Skeleton marker labels, which were intact during the live capture, may be discarded, and reconstructed markers may not be auto-labeled again if the Skeletons are never in well-trackable poses throughout the captured Take. This is another reason why you want to start a capture with a calibration pose (e.g. T-pose).
The Marker Set is a type of asset in Motive. It is the most fundamental method of grouping related markers, and it can be used to manually label individual markers during post-processing of captured data using the Labels pane. Note that Marker Sets are used for manual labeling only. For automatic labeling during Live mode, a Rigid Body asset or a Skeleton asset is necessary.
Since creating Rigid Bodies or Skeletons groups the markers in each set and automatically labels them, Marker Sets are not commonly used in the processing workflow. However, they are still useful for marker-specific tracking applications or when marker labeling is done in pipelines other than auto-labeling. Marker Sets are also useful when organizing and reassigning labels.
The Labels pane is used to assign, remove, and edit marker labels in the 3D data. The Tracks View under the Graph View pane can be used in conjunction with the Labels pane to monitor which markers and gaps are associated. The Labels pane is also used to examine the number of occluded gaps in each label, and it can be used along with the Editing Tools for complete post-processing.
For a given frame, all labels are color-coded. For each frame of 3D data, assigned marker labels are shown in white, labels without reconstructions are shown in red, and unlabeled reconstructions are shown in orange; similar to how they are presented in the 3D View.
See the Labels pane page for detailed explanation on each option.
The QuickLabel mode allows you to tag labels with single clicks in the viewport, and it is a handy way to reassign or modify marker labels throughout a capture. When the QuickLabel mode is toggled, the mouse cursor switches to a finger icon with the selected label name attached next to it. Also, when the display label option is enabled in the perspective view, all of the assigned marker labels will be displayed next to each marker in the 3D viewport, as shown in the image below. Select the Marker Set you wish to label, and tag the appropriate labels to each marker throughout the capture.
When assigning labels using the Quick Label Mode, the labeling scope is configured from the labeling range settings. You can restrict the labeling operation to apply from the current frame backward, current frame forward, or both depending on the trajectory. You may also restrict labeling operations to apply the selected label to all frames in the Take, to a selected frame range, or to a trajectory 'fragment' enclosed by gaps or spikes. The fragment/spike setting is used by default and this best identifies mislabeled frame ranges and assigns marker labels. See the Labels pane page for details on each feature.
Under the drop-down menu in the Labels pane, select an asset you wish to label.
All of the involved markers will be displayed under the columns.
From the label list, select unlabeled or mislabeled markers.
Inspect the behavior of the selected trajectory and decide whether you want to apply the selected label to frames forward, frames backward, or both. This option can be selected from the labeling range settings in the Labels pane.
Hiding Marker Labels
If the marker labels are set to visible in the 3D viewport, Motive will show all of the marker labels when entering the QuickLabel mode. To hide all of the marker labels from showing up in the viewport, you can click on the visual aids option in the perspective view, and uncheck marker labels.
The following section provides the general labeling steps in Motive. Note that the labeling workflow is flexible and alternative approaches to the steps listed in this section could also be used. Utilize the auto-labeling pipelines in combination with the Labels pane to best reconstruct and label the 3D data of your capture.
Labeling Tips
Use the Graph View pane to monitor occlusion gaps and labeling errors as you post-process captured Takes.
When using the Labeling pane, choose the most appropriate labeling setting (all, selected, spike, or fragment) to efficiently label selected trajectories. See more from the Labeling pane page.
Hotkeys can increase the speed of the workflow. Use Z and Shift+Z hotkeys to quickly find gaps in the selected trajectory.
When working with skeleton assets, label the hip segment first. The hip segment is the main parent segment, at the top of the segment hierarchy, to which all other child segments are attached. Manually assigning the hip markers sometimes helps the auto-labeler to label the entire asset.
For skeleton assets, the Show Tracking Errors property can be utilized to display tracking errors on skeleton segments.
Step 1. In the Data pane, Reconstruct and auto-label the take with all of the desired assets enabled.
Step 2. In the Graph View pane, examine the trajectories and navigate to the frame where labeling errors are frequent.
Step 3. Open the Labels pane.
Step 4. Select an asset that you wish to label.
Step 5. From the label columns, click on a marker label that you wish to re-assign.
Step 6. Inspect behavior of a selected trajectory and its labeling errors and set the appropriate labeling settings (allowable gap size, maximum spike and applied frame ranges).
Step 7. Switch to the QuickLabel mode (Hotkey: D).
Step 8. On the Perspective View, assign the labels onto the corresponding marker reconstructions by clicking on them.
Step 9. When all markers have been labeled, switch back to the Select Mode.
Step 1. Start with 2D data of a captured Take with model assets (Skeletons and Rigid Bodies).
Step 2. Reconstruct and Auto-Label, or just Reconstruct, the Take with all of the desired assets enabled under the Assets pane. If you use reconstruct only, you can skip steps 3 and 5 for the first iteration.
Step 3. Examine the reconstructed 3D data, and inspect the frame range where markers are mislabeled.
Step 4. Using the Labels pane, manually fix/assign marker labels, paying attention to your label settings (direction, max gap, max spike, selected duration).
Step 5. Unlabel all trajectories you want to re-auto-label.
Step 6. Auto-Label the Take again. Only the unlabeled markers will get re-labeled, and all existing labels will be kept the same.
Step 7. Re-examine the marker labels. If some labels are still not assigned correctly in any of the frames, repeat steps 3-6 until complete.
The general process for resolving a labeling error is:
Identify the trajectory with the labeling error.
Determine whether the error is a swap, an occlusion, or an unlabeled trajectory.
Resolve the error with the correct tool.
Swap: Use the Swap Fix tool ( Edit Tools ) or just re-assign each label ( Labels panel ).
When manually labeling markers to fix swaps, set appropriate settings for the labeling direction, max spike, and selected range settings.
Occlusion: Use the Gap Fill tool ( Edit Tools ).
Unlabeled: Manually label an unlabeled trajectory with the correct label ( Labels panel ).
For more data editing options, read through the Data Editing page.
The following tutorials use Motive 1.10. In Motive 2.0, the Data pane and Assets pane are used instead of the Project pane.
When recorded 3D data have been labeled properly and entirely throughout the Take, you will not need to edit marker labels. If you don't have 3D data recorded, you can reconstruct and auto-label the Take to obtain 3D data and label all of the skeleton and rigid body markers. If all of the markers are well reconstructed and there are no significant occlusions, auto-labeled 3D data may be acceptable right away. In this case, you can proceed without post-processing of marker labels.
Recorded 3D data has no gaps in the labels, or the Reconstruct and Auto-label works perfectly the first time without additional post-processing.
Examine the Take(s). Check the Labeling pane, or the tracks view, to make sure no occlusion exists within the capture, and all markers are consistently labeled.
Done.
When skeleton markers are mislabeled only within specific frame ranges of a Take, you will have to manually re-label the markers. This may occur when a subject performs dynamic movements or comes into contact with another object during the recorded Take. After correcting the mislabeled markers, you can also use the auto-labeler to assign the remaining missing labels.
Start with recorded 3D data or Reconstruct and auto-label the Take to obtain newly labeled 3D data.
Inspect the Take to pick out the frame ranges with bad tracking.
If markers are mislabeled during the majority of the capture, unlabel all markers from the entire capture by right-clicking on the Take in the Data Management pane and clicking Delete Marker Labels. You can do this on selected frame ranges as well.
Scrub the timeline to a frame just before the bad tracking frame range.
Using the Labeling pane, manually label the skeleton. Depending on the severity of the mislabels, you can either label the entire skeleton or just the key segments starting from the hip.
Scrub the timeline to a frame after the bad tracking frame range.
Manually label the same skeleton.
Auto-label the Take.
Check the frames again and correct any remaining mislabels using the Labeling pane.
For Take(s) where skeletons are never tracked perfectly and the markers are consistently mislabeled, you will need to manually assign the correct labels for the skeleton asset(s). This can happen when the skeleton(s) are never in an easily trackable pose throughout the Take (e.g. captures where the actors are rolling on the ground). It is usually recommended that all skeleton Takes start and end with a T-pose in order to easily distinguish the skeleton markers.
This also helps the skeleton solver to correctly auto-label the associated markers; however, in some cases, only a specific section of a Take needs to be trimmed out, or including the calibration poses might not be possible. Manually assigning labels can help the auto-labeler to correctly label markers and acquire skeletons properly in a Take.
You will get the best results if you manually label the entire skeleton, but doing so can be time-consuming. You can also label only the mislabeled segment or the key segment (the hip) and run the auto-labeler to see if it correctly assigns the remaining labels with that small amount of help.
Start with recorded 3D data or Reconstruct the Take.
At a certain point of the Take (usually at a frame where you can best identify the pose of the skeleton), use the Labeling pane to manually assign the marker labels for skeletons that are not labeling correctly. Depending on the severity of the mislabels, you can either label the entire skeleton or only the key segments starting from the hip.
After manually assigning the labels, auto-label the Take. Make sure the corresponding assets are enabled in the Assets pane.
Check to see if all markers are correctly assigned throughout the Take. If not, re-label or unlabel any mislabeled markers and run auto-label again if needed.
Marker occlusions can be detrimental to the auto-labeling process. After a gap of multiple frames, an occluded marker can become unlabeled entirely, or a nearby reconstruction can be mistakenly recognized as the occluded marker, resulting in label swaps or mislabels. Skeleton and rigid body asset definitions may accommodate labeling through such occlusions, but in some cases labeling errors may persist throughout the Take. The following steps can be used to re-assign the labels in this case.
If tracked markers are relatively stationary during the occluded frames, you may want to increase the Maximum Marker Label Gap value under the Auto-Labeler settings in the Reconstruction pane so that the occluded marker keeps its label after auto-labeling the Take. However, note that adjusting this setting will not help if the marker moves beyond the Prediction Radius (mm) setting during the occlusion.
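Whether raising the gap setting can help is ultimately a geometry question: the marker must still be inside the prediction radius when it reappears. A rough back-of-the-envelope check, where every number is a hypothetical example rather than a Motive default:

```python
# Feasibility check (not a Motive API): will an occluded marker still be
# inside the solver's prediction radius when it reappears?

def max_displacement_mm(speed_mm_per_s, gap_frames, fps):
    """Worst-case straight-line travel during an occlusion gap."""
    return speed_mm_per_s * (gap_frames / fps)

# A marker moving at 500 mm/s, occluded for 10 frames at 120 FPS:
travel = max_displacement_mm(500.0, 10, 120)   # about 41.7 mm
prediction_radius = 30.0                        # hypothetical setting, in mm

# If the marker can out-run the prediction radius, raising the maximum
# label gap alone will not preserve its label.
print(travel, travel <= prediction_radius)
```

Here the marker travels farther than the prediction radius, so increasing the label gap alone would not keep its label assigned.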
Start with recorded 3D data, or Reconstruct and auto-label the Take.
Examine the Take, and go to a frame where markers are mislabeled right after an occlusion.
Using the Quick Label Mode, correct the labeling errors.
Move on to the next occluded frame range. When the marker reappears, correct the labels.
After correcting the labels, Auto-label the Take again.
Use the Fill Gaps tool in the Editing tools to interpolate the occluded trajectories.
This page provides information and instructions on how to utilize the Probe Measurement Kit.
The measurement probe tool utilizes the precise tracking of OptiTrack mocap systems to let you measure 3D locations within a capture volume. A probe with an attached Rigid Body is included with the purchased measurement kit. By looking at the markers on the Rigid Body, Motive calculates a precise x-y-z location of the probe tip, allowing you to collect 3D samples in real time with sub-millimeter accuracy. For the most precise calculation, a probe calibration process is required. Once the probe is calibrated, it can be used to sample single points or multiple points to compute the distance or angle between sampled 3D coordinates.
The measurement kit includes:
Measurement probe
Calibration block with 4 slots, with approximately 100 mm spacing between each point.
This section provides detailed steps on how to create and use the measurement probe. Please make sure the camera volume has been calibrated successfully before creating the probe. System calibration is important to the accuracy of marker tracking, and it will directly affect the probe measurements.
Creating a probe using the Builder pane
Open the Builder pane under the View tab and click Rigid Bodies.
Bring the probe out into the tracking volume and create a Rigid Body from the markers.
Under the Type drop-down menu, select Probe. This will bring up the options for defining a Rigid Body for the measurement probe.
Select the Rigid Body created in step 2.
Place and fit the tip of the probe in one of the slots on the provided calibration block.
Note that there are two steps in the calibration process: refining the Rigid Body definition and calibrating the pivot point. Click the Create button to initiate the probe refinement process.
Slowly move the probe in a circular pattern while keeping the tip fitted in the slot, tracing a cone shape overall. Gently rotate the probe to collect additional samples.
After the refinement, Motive will automatically proceed to the next step: the pivot point calibration.
Repeat the same movement to collect additional sample data for precisely calculating the location of the pivot or the probe tip.
When sufficient samples have been collected, the pivot point will be positioned at the tip of the probe and the Mean Tip Error will be displayed. If the probe calibration was unsuccessful, repeat the calibration from step 4.
Once the probe is calibrated successfully, a probe asset will be displayed over the Rigid Body in Motive, and live x/y/z position data will be displayed under the Probe pane.
Caution
The probe tip MUST remain fitted securely in the slot on the calibration block during the calibration process.
Also, do not press in with the probe, since the deformation from compressing it could affect the result.
Using the Probe pane for sample collection
Under the Tools tab, open the Probe pane.
Place the probe tip on the point that you wish to collect.
Click Take Sample in the Probe pane.
A virtual reference point is constructed at the location, and the coordinates of the point are displayed in the Probe pane. The point's location can be exported from the Probe pane as a .CSV file.
Collecting additional samples will provide distance and angles between collected samples.
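The distance and angle values derived between samples can be reproduced from the sampled x/y/z coordinates themselves. A minimal sketch, where the sample coordinates are made up rather than exported Motive data:

```python
# Distance and angle between probe samples (illustrative, not Motive's code).
import math

def distance(a, b):
    """Straight-line distance between two 3D sample points."""
    return math.dist(a, b)

def angle_deg(a, b, c):
    """Angle (in degrees) at vertex b, formed by samples a-b-c."""
    u = [a[i] - b[i] for i in range(3)]
    v = [c[i] - b[i] for i in range(3)]
    dot = sum(ui * vi for ui, vi in zip(u, v))
    cos_t = dot / (math.dist(a, b) * math.dist(c, b))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))

# Three hypothetical samples, in millimeters:
p1, p2, p3 = (0.0, 0.0, 0.0), (100.0, 0.0, 0.0), (100.0, 100.0, 0.0)
print(distance(p1, p2))       # 100.0 (mm)
print(angle_deg(p1, p2, p3))  # 90.0 (degrees)
```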
You can also use the probe samples to reorient the coordinate axes of the capture volume. The set origin button will position the coordinate space origin at the tip of the probe, and the set orientation option will reorient the capture space by referencing three sample points.
As samples are collected, their coordinate data is automatically written out to CSV files in the OptiTrack documents folder, located at C:\Users\[Current User]\Documents\OptiTrack. This file contains the 3D positions for all collected measurements, their respective RMSE values, and the distances between each pair of consecutive sample points.
Also, if needed, you can trigger Motive to export the collected sample coordinates to a designated directory. To do this, simply click the export option in the Probe pane.
The location of the probe tip can also be streamed into another application in real-time. You can do this by data-streaming the probe Rigid Body position via NatNet. Once calibrated, the pivot point of the Rigid Body gets positioned precisely at the tip of the probe. The location of a pivot point is represented by the corresponding Rigid Body x-y-z position, and it can be referenced to find out where the probe tip is located.
The Data Streaming settings can be found by selecting View > Data Streaming Pane.
Select the network interface address for streaming data.
Select desired data types to stream under streaming options.
When streaming skeletons, set the appropriate bone naming convention for the client application.
Check Broadcast Frame Data at the top.
Configure the streaming settings and designate the corresponding IP address in the client application.
Stream live or playback captures.
Firewall or anti-virus software can block network traffic, so it is important to make sure these applications are disabled or configured to allow access to both server (Motive) and Client applications.
Before starting to broadcast data onto the selected network interface, define which data types to stream. Under streaming options, there are settings where you can include or exclude specific data types and syntax. Set only the necessary criteria to true. For most applications, the default settings will be appropriate.
When streaming skeleton data, the bone naming convention formats the annotations for each segment in the streamed data. The appropriate convention should be configured so that the client application properly recognizes the segments. For example, when streaming to Autodesk pipelines, the naming convention should be set to FBX.
NatNet is a client/server networking protocol which allows sending and receiving data across a network in real time. It utilizes UDP along with either Unicast or Multicast communication for integrating and streaming reconstructed 3D data, rigid body data, and skeleton data from OptiTrack systems to client applications. Within the API, a class for communicating with OptiTrack server applications is included for building client protocols. Using the tools provided in the NatNet API, capture data can be used in various application platforms. Please refer to the NatNet User Guide for more information on using NatNet and its API references.
Rotation conventions
NatNet streams rotational data in quaternions. If you wish to present the rotational data in the Euler convention (pitch-yaw-roll), the quaternion data needs to be converted into Euler angles. In the provided NatNet SDK samples, the SampleClient3D application converts quaternion rotations into Euler rotations for display in the application interface. The sample algorithms for the conversion are scripted in the NATUtils.cpp file. Refer to the NATUtils.cpp file and the SampleClient3D.cpp file to see how to convert quaternions into Euler conventions.
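As an illustration (not a transcription of NATUtils.cpp), a quaternion-to-Euler conversion under one common set of conventions, an XYZ rotation order with pitch about X, yaw about Y, and roll about Z in a right-handed frame, can be sketched as:

```python
# Quaternion (w, x, y, z) to (pitch, yaw, roll) in degrees, assuming an
# XYZ (Rx * Ry * Rz) decomposition. Sketch only; verify the convention
# against NATUtils.cpp for your client before relying on it.
import math

def quat_to_euler_deg(w, x, y, z):
    """Convert a unit quaternion to (pitch, yaw, roll) in degrees."""
    r02 = 2.0 * (x * z + w * y)  # rotation-matrix term R[0][2] = sin(yaw)
    pitch = math.atan2(2.0 * (w * x - y * z), 1.0 - 2.0 * (x * x + y * y))
    yaw   = math.asin(max(-1.0, min(1.0, r02)))
    roll  = math.atan2(2.0 * (w * z - x * y), 1.0 - 2.0 * (y * y + z * z))
    return tuple(math.degrees(a) for a in (pitch, yaw, roll))

# 90 degrees about the X axis should read as pure pitch:
s = math.sin(math.pi / 4)
print(quat_to_euler_deg(math.cos(math.pi / 4), s, 0.0, 0.0))  # ~ (90, 0, 0)
```

Note the `asin` clamp: near a 90-degree yaw the decomposition approaches gimbal lock, which is precisely why the quaternion form is preferred on the wire.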
XML Triggering Port: Command Port (Advanced Network Settings) + 2. This defaults to 1512 (1510 + 2). Tip: Within the NatNet SDK sample package, there are simple applications (BroadcastSample.cpp (C++) and NatCap (C#)) that demonstrate the use of the XML remote trigger in Motive.
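A hedged sketch of such a remote trigger sender is below. The XML element and attribute names here are assumptions modeled loosely on the broadcast samples, not authoritative syntax; consult BroadcastSample.cpp for the exact packet format.

```python
# Sketch: broadcast a capture start/stop trigger packet over UDP.
# XML field names (Name, TimeCode) are assumptions, not verified syntax.
import socket

TRIGGER_PORT = 1510 + 2  # default command port (1510) + 2 = 1512

def capture_packet(kind, take_name):
    """Build a CaptureStart / CaptureStop XML payload (fields assumed)."""
    assert kind in ("CaptureStart", "CaptureStop")
    return (
        '<?xml version="1.0" encoding="utf-8"?>'
        '<{k}><Name VALUE="{n}"/><TimeCode VALUE="00:00:00:00"/></{k}>'
    ).format(k=kind, n=take_name).encode("utf-8")

def send_trigger(packet, host="255.255.255.255"):
    """Broadcast the packet on the XML triggering port."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(packet, (host, TRIGGER_PORT))
```

Usage would be along the lines of `send_trigger(capture_packet("CaptureStart", "Take_001"))`, with Motive listening on the same network segment.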
XML syntax for the start / stop trigger packet
Capture Start Packet
Capture Stop Packet
The OptiTrack Duo/Trio tracking bars are factory calibrated, and there is no need to calibrate the cameras to use the system. By default, the tracking volume is set at the center origin of the cameras, and the axes are oriented so that the Z-axis points forward, the Y-axis up, and the X-axis left.
If you wish to change the location and orientation of the global axis, you can use the Coordinate Systems Tool which can be found under the Tools tab.
Adjusting the Coordinate System Steps
First, place the calibration square at the desired origin.
[Motive] Open the Coordinate System Tools pane under the Tools tab.
[Motive] Click the Set Ground Plane button from the Coordinate System Tools pane, and the global origin will be adjusted.
The Motive Batch Processor is a separate stand-alone Windows application, built on the new NMotive scripting and programming API, that can be utilized to process a set of Motive Take files via IronPython or C# scripts. While the Batch Processor includes some example script files, it is primarily designed to utilize user-authored scripts.
Initial functionality includes scripting access to file I/O, reconstructions, high-level Take processing using many of Motive's existing editing tools, and data export. Upcoming versions will provide access to track, channel, and frame-level information, for creating cleanup and labeling tools based on individual marker reconstruction data.
Motive Batch Processor scripts make use of the NMotive .NET class library, and you can also utilize the NMotive classes to write .NET programs and IronPython scripts that run outside of this application. The NMotive assembly is installed in the Global Assembly Cache and is also located in the assemblies sub-directory of the Motive install directory. For example, the default location for the assembly included in the 64-bit Motive installer is:
C:\Program Files\OptiTrack\Motive\assemblies\x64
The full source code for the Motive Batch Processor is also installed with Motive, at:
C:\Program Files\OptiTrack\Motive\MotiveBatchProcessor\src
You are welcome to use the source code as a starting point to build your own applications on the NMotive framework.
Requirements
A batch processor script using the NMotive API. (C# or IronPython)
Take files that will be processed.
Steps
First, select and load a Batch Processor script. Sample scripts for various pipelines can be found in the [Motive Directory]\MotiveBatchProcessor\ExampleScripts\ folder.
Load the captured Takes (TAK) that will be processed using the imported scripts.
Click Process Takes to batch process the Take files.
Reconstruction Pipeline
A class reference in Microsoft compiled HTML (.chm) format can be found in the Help sub-directory of the Motive install directory. The default location for the help file (in the 64-bit Motive installer) is:
C:\Program Files\OptiTrack\Motive\Help\NMotiveAPI.chm
The Motive Batch Processor can run C# and IronPython scripts. Below is an overview of the C# script format, as well as an example script.
A valid Batch Processor C# script file must contain a single class implementing the ITakeProcessingScript interface. This interface defines a single function: Result ProcessTake( Take t, ProgressIndicator progress ). Result, Take, and ProgressIndicator are all classes defined in the NMotive namespace. The Take object t is an instance of the NMotive Take class; it is the take being processed. The progress object is an instance of the NMotive ProgressIndicator and allows the script to update the Batch Processor UI with progress and messages. The general format of a Batch Processor C# script is:
In the [Motive Directory]\MotiveBatchProcessor\ExampleScripts\ folder, there are multiple C# (.cs) sample scripts that demonstrate the use of NMotive for processing various pipelines, including tracking data export and other post-processing tools. Note that your C# script file must have a '.cs' extension.
Included sample script pipelines:
ExporterScript - BVH, C3D, CSV, FBXAscii, FBXBinary, TRC
TakeManipulation - AddMarker, DisableAssets, GapFill, MarkerFilterScript, ReconstructAutoLabel, RemoveUnlabeledMarkers, RenameAsset
Your IronPython script file must import the clr module and reference the NMotive assembly. In addition, it must contain the following function:
def ProcessTake(t, progress):  # t is an NMotive Take; progress is an NMotive ProgressIndicator
The following illustrates a typical IronPython script format.
In the [Motive Directory]\MotiveBatchProcessor\ExampleScripts\ folder, there are sample scripts that demonstrate the use of NMotive for processing various pipelines, including tracking data export and other post-processing tools. Note that your IronPython script file must have a '.py' extension.
It is strongly recommended that you use separate audio capture software with timecode to capture and synchronize audio data. Audio capture in Motive is for reference only and is not intended to align perfectly with video or motion capture data.
Take scrubbing is not supported to align with audio recorded within Motive. If you would like the audio to be closely in reference to video and motion capture data, you must play the take from the beginning.
Recorded audio files can be played back from a captured Take or exported into WAV audio files. This page details how to record and play back audio in Motive. Before using an audio input device (microphone) in Motive, first make sure that the device is properly connected and configured in Windows.
In Motive, audio recording and playback settings can be accessed from the tab → Audio Settings.
In Motive, open the Audio Settings, and check the box next to Enable Capture.
Select the audio input device that you want to use.
Press the Test button to confirm that the input device is properly working.
Make sure the device format of the recording device matches the device format that will be used by the playback devices (speakers and headsets). This is very important, as the recorded audio will not play back if these formats do not match. Most speakers have at least 2 channels, so an input device with 2 channels should be used for recording.
Capture the Take.
In Motive, open a Take that includes audio recordings.
To playback recorded audio from a Take, check the box next to Enable Playback.
Select the audio output device that you will be using.
Make sure the configurations in Device Format closely matches the Take Format. This is elaborated further in the section below.
Play the Take.
In order to play back audio recordings in Motive, the audio format of the recorded sound MUST closely match the audio format used by the output device. Specifically, the channel count and frequency of the audio must match; otherwise, the recorded sound will not play back.
The recorded audio format is determined by the format of the recording device that was used when capturing the Take. However, the audio formats of the input and output devices may not always agree. In this case, you will need to adjust the input device properties to match the Take.
A device's audio format can be configured under the Sound settings in Windows. In Sound settings (accessed from the Control Panel), select the recording device, click Properties, and change the default format under the Advanced tab, as shown in the image below.
There are a variety of programs and hardware that specialize in audio capture. A non-exhaustive list of examples:
Tentacle Sync TRACK E
Adobe Premiere
Avid Media Composer
Etc...
In order to capture audio using a different program, you will need to connect both the motion capture system (through the eSync) and the audio capture device to timecode data (and possibly genlock data). You can then use the timecode information to synchronize the two sources of data for your end product.
The following devices are internally tested and should work for most use cases for reference audio only:
AT2020 USB
MixPre-3 II Digital USB Preamp
This page covers different video modes that are available on the OptiTrack cameras. Depending on the video mode that a camera is configured to, captured frames are processed differently, and only the configured video mode will be recorded and saved in Take files.
Video types, or image-processing modes, available in OptiTrack Cameras
There are different video types, or image-processing modes, that can be used when capturing with OptiTrack cameras. Depending on the camera model, the available modes vary slightly. Each video mode processes captured frames differently at both the camera hardware and software levels. Furthermore, the precision of the capture and the required amount of CPU resources will vary depending on the configured video type.
The video types are categorized as either tracking modes (object mode and precision mode) or reference modes (MJPEG and raw grayscale). Only cameras in the tracking modes will contribute to the reconstruction of 3D data.
Motive records frames only in the configured video type. A camera's video type cannot be switched during post-processing of recorded Takes.
(Tracking Mode) Object mode performs on-camera detection of the centroid location, size, and roundness of the markers, and the respective 2D object metrics are then sent to the host PC. In general, this mode is recommended for obtaining 3D data. Compared to other processing modes, Object mode has the smallest CPU footprint and, as a result, the lowest processing latency, while maintaining high accuracy. However, be aware that the 2D reflections are truncated into object metrics in this mode. Object mode is beneficial for Prime series and Flex 13 cameras when the lowest latency is necessary or when CPU performance is taxed by Precision Grayscale mode (e.g. high camera counts on a less powerful CPU).
Supported Camera Models: Prime/PrimeX series, Flex 13, and S250e camera models.
(Tracking Mode) Precision Mode performs on-camera detection of marker reflections and their centroids. These centroid regions of interest are sent to the PC for additional processing to determine the precise centroid location. This provides high-quality centroid locations but is computationally expensive, and it is recommended only for low to moderate camera count systems for 3D tracking when Object Mode is unavailable.
Supported Camera Models: Flex series, Tracking Bars, S250e, Slim13e, and Prime 13 series camera models.
(Reference Mode) The MJPEG-compressed grayscale mode captures grayscale frames that are compressed on-camera for scalable reference video. Grayscale images are used only for reference purposes, and processed frames will not contribute to the reconstruction of 3D data. The MJPEG mode can run at full frame rate and be synchronized with the tracking cameras.
Supported Camera Models: All camera models
(Reference Mode) Raw grayscale mode processes full-resolution, uncompressed grayscale images. The grayscale mode is designed to be used only for reference purposes, and processed frames will not contribute to the reconstruction of 3D data. Because of the high bandwidth required to send raw grayscale frames, this mode is not fully synchronized with the tracking cameras, and cameras in this mode will run at a lower frame rate. Also, raw grayscale video cannot be exported from a recording. Use this video mode only for aiming and for monitoring camera views when diagnosing tracking problems.
Supported Camera Models: All camera models.
From Perspective View
In the perspective view, right-click on a camera from the viewport and set the camera to the desired video mode.
From Cameras View
In the cameras view, right-click on a camera view and change the video type for the selected camera.
Compared to the object data produced by the tracking cameras in the system, grayscale video is much larger in data size, and recording reference video consumes more network bandwidth. A high amount of data traffic can increase system latency or cause reductions in the system frame rate. For this reason, we recommend setting no more than one or two cameras to a reference mode. Also, instead of raw grayscale video, compressed MJPEG grayscale video can be recorded to reduce the data traffic. Reference views can be observed from either the Camera Preview pane or the Reference View pane.
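Some rough arithmetic shows why only one or two reference cameras are recommended. The resolution, frame rate, and MJPEG compression ratio below are illustrative assumptions, not specifications for any particular camera model:

```python
# Back-of-the-envelope bandwidth estimate for one reference camera.
# All numbers here are hypothetical examples.

def raw_grayscale_mbps(width, height, fps):
    """Uncompressed 8-bit grayscale bandwidth in megabytes per second."""
    return width * height * 1 * fps / 1e6  # 1 byte per pixel

w, h, fps = 1280, 1024, 120           # assumed camera settings
raw = raw_grayscale_mbps(w, h, fps)   # ~157 MB/s for a single raw camera
mjpeg = raw / 10                      # assuming a rough 10:1 MJPEG ratio

print(round(raw, 1), round(mjpeg, 1))
```

Even one raw grayscale camera at these assumed settings pushes on the order of 150 MB/s onto the network, which is why MJPEG is the preferred reference mode.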
Note:
Processing latency can be monitored from the status bar located at the bottom.
Grayscale images are used only for reference purposes, and processed frames will not contribute to reconstruction of 3D data.
The API reports "world-space" values for markers and rigid body objects at each frame. It is often desirable to convert the coordinates of points reported by the API from the world-space (or global) coordinates into the local space of the rigid body. This is useful, for example, if you have a rigid body that defines the world space that you want to track markers within.
Rotation values are reported both as quaternions and as roll, pitch, and yaw angles (in degrees). Quaternions are a four-dimensional rotation representation that provides greater mathematical robustness by avoiding the gimbal lock singularities that may be encountered when using roll, pitch, and yaw (also known as Euler angles). However, quaternions are also more mathematically complex and more difficult to visualize, which is why many still prefer Euler angles.
There are many possible combinations of Euler angles, so it is important to understand the order in which rotations are applied, the handedness of the coordinate system, and the axis (positive or negative) about which each rotation is applied.
These are the conventions used in the API for Euler angles:
Rotation order: XYZ
All coordinates are *right-handed*
Pitch is degrees about the X axis
Yaw is degrees about the Y axis
Roll is degrees about the Z axis
Position values are in millimeters
To create a transform matrix that converts from world coordinates into the local coordinate system of your chosen rigid body, you will first want to compose the local-to-world transform matrix of the rigid body, then invert it to create a world-to-local transform matrix.
To compose the rigid body local-to-world transform matrix from values reported by the API, you can first compose a rotation matrix from the quaternion rotation value or from the yaw, pitch, and roll angles, then inject the rigid body translation values. Transform matrices can be defined as either "column-major" or "row-major". In a column-major transform matrix, the translation values appear in the right-most column of the 4x4 transform matrix. For purposes of this article, column-major transform matrices will be used. It is beyond the scope of this article, but it is just as feasible to use row-major matrices by transposing matrices.
In general, a world transform matrix has the form:

    M = [ R11 R12 R13 Tx ]
        [ R21 R22 R23 Ty ]
        [ R31 R32 R33 Tz ]
        [ 0   0   0   1  ]

where Tx, Ty, Tz are the world-space position of the origin (of the rigid body, as reported from the API), and R is a 3x3 rotation matrix composed as:

    R = [ Rx (Pitch) ] * [ Ry (Yaw) ] * [ Rz (Roll) ]
where Rx, Ry, and Rz are 3x3 rotation matrices composed according to:
A handy trick to know about local-to-world transform matrices is that once the matrix is composed, it can be validated by examining each column. The first three rows of column 1 are the (normalized) XYZ direction vector of the world-space X axis, column 2 holds the Y axis, and column 3 the Z axis. Column 4, as noted previously, is the location of the world-space origin.

To convert a point from world coordinates (coordinates reported by the API for a 3D point anywhere in space), you need a matrix that converts from world space to local space. We have a local-to-world matrix (where the local coordinates are defined as the coordinate system of the rigid body used to compose the transform matrix), so inverting that matrix will yield a world-to-local transform matrix. Inversion of a general 4x4 matrix can be slightly complex and may result in singularities; however, we are dealing with a special transform matrix that contains only a rotation and a translation. Because of that, we can take advantage of the method shown here to easily invert the matrix:
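That inversion trick can be sketched in pure Python. This is an illustration under the conventions above, not the article's original listing: since the matrix contains only a rotation R and a translation T, the inverse maps a world point p to R-transpose times (p - T).

```python
# Compose a rigid body's rotation from its quaternion, then transform a
# world-space point into the body's local space via the transpose trick.
import math

def quat_to_rot(w, x, y, z):
    """3x3 rotation matrix from a unit quaternion."""
    return [
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ]

def world_to_local(point, quat, translation):
    """Transform a world-space point into the rigid body's local space."""
    R = quat_to_rot(*quat)
    d = [point[i] - translation[i] for i in range(3)]
    # Multiplying by the transpose of R inverts the rotation.
    return [sum(R[r][c] * d[r] for r in range(3)) for c in range(3)]

# Hypothetical rigid body rotated 90 degrees about Y, sitting at (100, 0, 0) mm:
s = math.sin(math.pi / 4)
q = (math.cos(math.pi / 4), 0.0, s, 0.0)
print(world_to_local((100.0, 0.0, 50.0), q, (100.0, 0.0, 0.0)))  # ~ (-50, 0, 0)
```

Any number of world-space marker positions can be passed through `world_to_local` against the same rigid body pose, matching the workflow the article describes.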
Once the matrix is inverted, multiplying it by the coordinates of a world-space point yields that point in the local space of the rigid body. Any number of points can be multiplied by this inverted matrix to transform them from world (API) coordinates to local (rigid body) coordinates.
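A minimal sketch of these steps in Python (the helper names are ours, not the API's; the quaternion-to-matrix conversion and the Rᵀ inversion trick are standard):

```python
import math

def quat_to_matrix(qx, qy, qz, qw):
    # 3x3 rotation matrix from a unit quaternion (x, y, z, w order assumed)
    return [
        [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qz*qw),     2*(qx*qz + qy*qw)],
        [2*(qx*qy + qz*qw),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qx*qw)],
        [2*(qx*qz - qy*qw),     2*(qy*qz + qx*qw),     1 - 2*(qx*qx + qy*qy)],
    ]

def local_to_world(quat, t):
    # Column-major 4x4: rotation in the upper-left, translation in the last column
    R = quat_to_matrix(*quat)
    return [R[0] + [t[0]], R[1] + [t[1]], R[2] + [t[2]], [0.0, 0.0, 0.0, 1.0]]

def invert_rigid(M):
    # For a rotation-plus-translation matrix: M^-1 = [ R^T | -R^T * T ]
    R_T = [[M[j][i] for j in range(3)] for i in range(3)]
    t = [M[0][3], M[1][3], M[2][3]]
    nt = [-sum(R_T[i][j] * t[j] for j in range(3)) for i in range(3)]
    return [R_T[0] + [nt[0]], R_T[1] + [nt[1]], R_T[2] + [nt[2]], [0.0, 0.0, 0.0, 1.0]]

def transform(M, point):
    # Multiply a 3D point (as a homogeneous column vector) by the matrix
    v = list(point) + [1.0]
    return tuple(sum(M[i][j] * v[j] for j in range(4)) for i in range(3))
```

With this, `transform(invert_rigid(M), world_point)` yields the point in rigid-body-local coordinates, exactly as described above.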
The API includes a sample (markers.sln/markers.cpp) that demonstrates this exact usage.
Hotkeys can be viewed and customized from the panel. The chart below lists only the commonly used hotkeys; other assigned and unassigned hotkeys are not included. For a complete list of hotkey assignments, please check the panel in Motive.
Function | Default Hotkey |
---|---|
This page explains some of the settings that affect how 3D tracking data is obtained. Most of the related settings can be found under the Live Pipeline tab in the . A basic understanding of this process will allow you to fully utilize Motive for analyzing and optimizing captured 3D tracking data. That said, we do not recommend changing these settings, as the defaults should work well for most tracking applications.
THR setting under camera properties
Reconstruction is the process of deriving 3D points from 2D coordinates obtained from captured camera images. When multiple synchronized images are captured, the 2D centroid locations of detected marker reflections are triangulated on each captured frame and processed through the solver pipeline in order to be tracked. This process involves trajectorization of detected 3D markers within the calibrated capture volume and the booting process for tracking defined assets.
For real-time tracking in Live mode, the settings for this pipeline can be configured from the Live-Pipeline tab in the . For post-processing recorded files in Edit mode, the solver settings can be accessed under corresponding . Note that optimal configurations may vary depending on capture applications and environmental conditions, but for most common applications, default settings should work well.
In this page, we will focus on the and the , which are the key settings that have direct effects on the reconstruction outcome.
To toggle between camera video types in Motive, click the camera video type icon under Mode in the Devices pane.
We do not recommend lowering the THR value (default: 200) for the cameras, since lowering it can introduce false reconstructions and noise in the data.
When a camera captures a frame, the 2D camera filter is applied. This filter evaluates the sizes and shapes of the detected reflections or IR illuminations and determines which ones can be accepted as markers. Please note that the camera filter settings can be configured in Live mode only, because this filter is applied at the hardware level when the 2D frames are first captured. Thus, you will not be able to modify these settings on a recorded Take, as the 2D data has already been filtered and saved; however, when needed, you can increase the threshold on the filtered 2D data and perform post-processing reconstruction to recalculate 3D data from the 2D data.
Min/Max Thresholded Pixels
The Min/Max Thresholded Pixels settings determine the lower and upper boundaries of the size filter. Only reflections with pixel counts within the boundaries will be considered marker reflections; any reflections below or above the defined boundaries will be filtered out. Thus, it is important to assign appropriate values to the minimum and maximum thresholded pixel settings.
For example, in a close-up capture application, marker reflections appear larger in the camera's view. In this case, you may want to raise the maximum threshold value so that reflections with more thresholded pixels are still accepted as marker reflections. For common applications, however, the default range should work fine.
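The behavior of such a size filter can be sketched as follows (the numeric bounds here are illustrative placeholders, not Motive's actual defaults):

```python
def passes_size_filter(thresholded_pixel_count, min_pixels=4, max_pixels=200):
    # Accept a reflection only if its thresholded pixel count falls within
    # the [min, max] boundaries; everything outside is filtered out.
    return min_pixels <= thresholded_pixel_count <= max_pixels

# Hypothetical pixel counts for four detected reflections:
reflections = [2, 15, 180, 450]
accepted = [n for n in reflections if passes_size_filter(n)]
```

For close-up captures where reflections appear larger, raising `max_pixels` keeps the bigger reflections from being rejected.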
Circularity
Object mode vs. Precision Mode
Tracked Ray (Green)
Tracked rays are marker rays that represent detected 2D centroids contributing to 3D reconstructions within the volume. Tracked rays are visible only when the corresponding reconstructions are selected in the viewport.
Untracked Ray (Red)
An untracked ray is a marker ray that fails to contribute to the reconstruction of a 3D point. Untracked rays occur when reconstruction requirements, usually the ray count or the maximum residual, are not met.
Minimum Rays to Start / Minimum Rays to Continue
This setting sets the minimum number of tracked marker rays required for a 3D point to be reconstructed. In other words, this is the required number of calibrated cameras that need to see the marker. Increasing the minimum ray count may prevent extraneous reconstructions, while decreasing it may help avoid marker occlusions when too few cameras can see a marker. In general, modifying this setting is recommended only for high camera count setups.
More Settings
Motive performs real-time reconstruction of 3D coordinates directly from either captured or recorded 2D data. When Motive is live-processing the data, you can examine the marker rays from the viewport, inspect the Live-Pipeline settings, and optimize the 3D data acquisition.
There are two modes where Motive is reconstructing 3D data in real-time:
Live mode (Live 2D data capture)
2D mode (Recorded 2D data)
The 2D Mode is used to monitor 2D data in the post-processing of a captured Take. When a capture is recorded in Motive, both 2D camera data and reconstructed 3D data are saved into a Take file, and by default, the 3D data gets loaded first when a recorded Take file is opened.
Switching to 2D Mode
Applying changes to 3D data
Once the reconstruction/solver settings have been adjusted and optimized on recorded data, the post-processing reconstruction pipeline needs to be performed on the Take in order to reconstruct a new set of 3D data. Here, note that the existing 3D data will get overwritten and all of the post-processing edits on it will be discarded.
The post-processing reconstruction pipeline allows you to convert 2D data from a recorded Take into 3D data. In other words, you can obtain a fresh set of 3D data from recorded 2D camera frames by performing reconstruction on a Take. Also, if any of the Point Cloud reconstruction parameters have been optimized post-capture, the changes will be reflected in the newly obtained 3D data.
Reconstructing recorded Takes again either by Reconstruct or Reconstruct and Auto-label pipeline will completely overwrite existing 3D data, and any post-processing edits on trajectories and marker labels will be discarded.
Also, for Takes involving Skeleton assets, if the Skeletons are never in well-trackable poses throughout the captured Take, the recorded Skeleton marker labels, which were intact during the live capture, may be discarded, and reconstructed markers may not be auto-labeled again. This is another reason why you want to start a capture with a calibration pose (e.g. T-pose).
Start frame of the exported data. You can either set it to the recorded first frame of the exported Take or to the start of the working range, or scope range, as configured under the or in the .
End frame of the exported data. You can either set it to the recorded end frame of the exported Take or to the end of the working range, or scope range, as configured under the or in the .
Highlight, or select, the desired frame range in the Graph pane, and zoom into it using the zoom-to-fit hotkey (F) or the icon.
Set the working range from the Control Deck by inputting start and end frames on the field.
Tracking Data Type | CSV | C3D | FBX | BVH | TRC |
---|---|---|---|---|---|
Assets pane: While the markers are selected in Motive, click on the add button in the Assets pane.
Position and orientation of a tracked Rigid Body can be monitored in real-time from the Info pane. You can simply select a Rigid Body in Motive, open the Info pane, and access the Rigid Bodies tool from the to view respective real-time tracking data of the selected Rigid Body.
The Mask Visible feature in the Calibration pane, or in the 2D Camera Preview pane (), automatically detects all of the existing reflections present in the 2D view and masks over them. If desired, masks can be created manually by drawing, selecting rectangular regions, or selecting circular regions in the image using the masking tools; you can also subtract masks by toggling between additive/subtractive masking modes (add or subtract).
Category | Description |
---|---|
Category | Description |
---|---|
Options | Description |
---|---|
Options | Descriptions |
---|---|
Row | Description |
---|---|
To create a MarkerSet, click the icon under the Assets pane and select New Marker Set.
Once a MarkerSet asset is created, its list of labels can be managed using the Markersets pane. First, the MarkerSet asset must be selected in Motive, and the corresponding asset will be listed in the Markersets pane. New marker labels can then be added by clicking the icon. If you wish to create multiple marker labels at once, they can be added by typing in the labels, or by copying and pasting a carriage-return-delimited list of labels from the Windows clipboard onto the pane (press Ctrl+V in the Marker List window), as shown in the image below.
Using the Labels pane, you can assign marker labels for each asset (Marker Set, Rigid Body, and Skeleton) via the QuickLabel Mode . The Labels pane also shows a list of labels involved in the Take and their corresponding percent completeness values. The percent completeness values indicate frame percentages of a Take for which the trajectory has been labeled. If the trajectory has no gaps (100% complete), no number will be shown. You can use this pane together with the Graph View pane to quickly locate gaps in a trajectory.
In the Perspective View pane, assign the selected label to a marker. If the Increment option () is set under the Labels pane, the label selection in the Labels pane will automatically advance each time you assign a label.
Show/Hide skeleton visibility in the perspective view to have a better view on the markers when assigning marker labels.
Toggle skeleton selectability in the perspective view to use the skeleton as a visual aid without it getting in the way of marker data.
Show/Hide skeleton sticks and marker colors under the visual aids in the perspective view options for intuitive identification of labeled markers as you tag through skeleton markers.
In the Labeling pane, disable the Increment Label Selection option, and select a marker set and a label that is frequently occluded.
In the Labeling pane, disable the Apply Labels to Previous Frames option, and leave only the Apply Labels to Upcoming Frames option enabled.
Motive offers multiple options to stream tracking data to external applications in real-time. Streaming plugins are available for Autodesk Motion Builder, The MotionMonitor, Visual3D, Unreal Engine 4, 3ds Max, Maya (VCS), VRPN, and trackd, and they can be downloaded from the OptiTrack website. For other streaming options, the NatNet SDK enables users to build custom clients to receive capture data. None of the listed streaming options requires a separate license. Common motion capture applications rely on real-time tracking, and the OptiTrack system is designed to deliver data at extremely low latency even when streaming to third-party pipelines. This page covers configuring Motive to broadcast frame data over a selected server network. Detailed instructions for specific plugins are included in the PDF documentation that ships with the respective plugins or SDKs.
Read through the page for explanations on each setting. NaturalPoint Data Streaming Forum: .
Open the in Motive
It is important to select the network adapter (interface, IP Address) for streaming data. Most Motive Host PCs will have multiple network adapters - one for the camera network and one (or more) for the local area network (LAN). Motive will only stream over the selected adapter (interface). Select the desired interface using the in Motive. The interface can be either over a local area network (LAN) or on the same machine (localhost, local loopback). If both server (Motive) and client application are running on the same machine, set the network interface to the local loopback address (127.0.0.1). When streaming over a LAN, select the IP address of the network adapter connected to the LAN. This will be the same address the Client application will use to connect to Motive.
See:
Motive (1.7+) uses a right-handed Y-up coordinate system. However, coordinate systems used in client applications may not always agree with the convention used in Motive. In this case, the coordinate system in streamed data needs to be modified to a compatible convention. For client applications with a different ground plane definition, Up Axis can be changed under Advanced Network Settings. For compatibility with left-handed coordinate systems, the simplest method is to rotate the capture volume 180 degrees on the Y axis when defining the ground plane during .
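If rotating the ground plane is not an option, the conversion can also be done numerically on the client side. One common right-handed-to-left-handed mapping (mirroring across the YZ plane; whether this matches your client's convention depends on that application, so treat it as an illustrative sketch):

```python
def to_left_handed(pos, quat):
    # Mirror across the YZ plane: negate X of the position, and negate the
    # Y and Z components of the quaternion (x, y, z, w component order assumed).
    x, y, z = pos
    qx, qy, qz, qw = quat
    return (-x, y, z), (qx, -qy, -qz, qw)
```

Other clients mirror the Z axis instead (negate z, and negate qx/qy); the pattern is the same, only the mirrored axis changes.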
If desired, recording in Motive can control, or be controlled by, other remote applications by sending or receiving either remote commands or XML broadcast messages to or from a client application over the UDP communication protocol. This enables client applications to trigger Motive, or vice versa. Using commands is recommended because they are not only more robust but also offer additional control features.
Recording start and stop commands can also be transmitted via XML packets. When triggering via XML messages, the Remote Trigger setting under must be set to true. In order for Motive, or clients, to receive the packets, the XML messages must be sent via the triggering UDP port. The triggering port is the defined Command Port (default: 1510) under the advanced network settings plus two, which defaults to 1512. Lastly, the XML messages must exactly follow the appropriate syntax:
Value | Description |
---|---|
Value | Description |
---|---|
Protocol | Markers | Rigid Bodies | Skeletons | Description | Download |
---|---|---|---|---|---|
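The transport side of XML triggering can be sketched as below. The port arithmetic follows the description above; the payload string is only a placeholder, and the exact XML syntax must be taken from the trigger documentation:

```python
import socket

COMMAND_PORT = 1510               # Motive's default Command Port
TRIGGER_PORT = COMMAND_PORT + 2   # XML trigger port; 1512 with the default

def send_xml_trigger(xml_payload, host="127.0.0.1", port=TRIGGER_PORT):
    # Send the XML message as a single UDP datagram to the triggering port.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(xml_payload.encode("utf-8"), (host, port))
    finally:
        sock.close()

# Placeholder payload -- substitute the documented XML trigger syntax here:
# send_xml_trigger("<CaptureStart/>")
```

Remember that the Remote Trigger setting must be enabled in Motive for such packets to have any effect.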
When using the Duo/Trio tracking bars, you can set the coordinate origin at the desired location and orientation using a calibration square. Make sure the calibration square is oriented properly.
[Motive] Select the Calibration square markers from the
Launch the Motive Batch Processor. It can be launched from either the start menu, Motive install directory, or from the in Motive.
When running the reconstruction pipeline in the Batch Processor, the reconstruction settings must be loaded using the ImportMotiveProfile method. From Motive, export the user profile and make sure it includes the reconstruction settings. Then, import this user profile file into the Batch Processor script before running the reconstruction (trajectorizer) pipeline so that the proper settings are used for reconstructing the 3D data. For more information, refer to the sample scripts located in the TakeManipulation folder.
IronPython is an implementation of the Python programming language that can use both .NET libraries and Python libraries. The Batch Processor can execute valid IronPython scripts in addition to C# scripts.
Audio capture within Motive does not natively synchronize to video or motion capture data and is intended for reference audio only. If you require synchronization, please use an external device and software with timecode. See below for suggestions.
Recorded audio files can be exported into WAV format. To export, right-click on a Take from the and select Export Audio option in the context menu.
For more information on synchronizing external devices, read through the page.
To switch between video types, simply right-click on one of the cameras from the pane and select the desired image processing mode under the video types.
You can check and/or switch the video type of a selected camera from either the , . You can also toggle the camera(s) between tracking mode and reference mode in the by clicking the Mode button ( / ). If you want to use all of the cameras for tracking, make sure all of them are in Tracking mode.
Open the and select one or more cameras from the list. Once the selection is made, the respective camera properties will be shown in the properties pane. The current video type is shown in the Video Mode section, and you can change it using the drop-down menu.
Cameras can also be set to record grayscale reference videos during capture. When using MJPEG mode, these videos are synchronized with the other captured frames, and they are used to observe what happens during a recorded capture. To record reference video, switch the camera into MJPEG grayscale mode by toggling the camera mode.
The Reference View pane can be accessed under the View tab → Reference Overlay or simply by clicking one of the reference view icons in the main toolbar (). This pane is used specifically for monitoring reference images from either a live capture or a recorded capture. When reference cameras are viewed in this pane, captured assets are overlaid on the video, which is very useful for analyzing events during the capture.
Camera settings can be configured under the . In general, the overall quality of 3D reconstructions is affected by the quality of the captured camera images. For this reason, the camera lenses must be focused on the tracking volume, and the settings should be configured so that the markers are clearly visible in each camera view. Camera settings such as exposure and IR intensity must therefore always be checked and optimized for each setup. The following sections highlight additional settings that are directly related to 3D reconstruction.
Tracking mode vs. Reference mode: Only the cameras that are configured in the tracking mode (Object or Precision) will contribute to reconstructions. Cameras in the reference mode (MJPEG or Grayscale) will NOT contribute to reconstructions. See page for more information.
The THR setting is located in the in Motive. When cameras are set to tracking mode, only pixels with brightness values greater than the configured threshold are captured and processed. Pixels brighter than the threshold are referred to as thresholded pixels; all other pixels that do not satisfy the brightness threshold are filtered out. Clusters of thresholded pixels are then run through the 2D Object Filter to be potentially considered as marker reflections.
To inspect brightness values of the pixels, set the Pixel Inspection to true under the View tab in the .
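Conceptually, the THR stage is a simple brightness cut. The sketch below (using a toy 8-bit image, not real camera data) shows which pixels would survive to the 2D Object Filter:

```python
THR = 200  # default threshold value

def thresholded_pixels(image, thr=THR):
    # Keep only pixels strictly brighter than the threshold, mirroring the
    # description above; each surviving entry is (row, col, brightness).
    return [(r, c, v)
            for r, row in enumerate(image)
            for c, v in enumerate(row)
            if v > thr]

# Toy 3x3 brightness grid; only the three pixels above 200 survive.
frame = [
    [ 12,  80, 210],
    [  5, 255, 240],
    [  0,  30, 199],
]
bright = thresholded_pixels(frame)
```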
The under application settings control the tracking quality in Motive. When a camera system captures multiple synchronized 2D frames, the images are processed through two main filters before being reconstructed into 3D tracking data. The first filter is applied at the camera hardware level and the second at the software level; both are important in deciding which 2D reflections get identified as marker reflections and reconstructed into 3D data. Adjust these settings to optimize 3D data acquisition in both live reconstruction and post-processing reconstruction of captured data.
Enable Marker Size under the visual aids () in the viewport to inspect which reflections are accepted, or omitted, by the size filter.
In addition to the size filter, the 2D Object Filter also identifies marker reflections based on their shape; specifically, their roundness. It assumes that all marker reflections are circular and filters out all non-circular reflections detected by each camera. The allowable circularity value is defined under the settings in the Reconstruction pane. The valid range is between 0 and 1, with 0 being completely flat and 1 being perfectly round. Only reflections with circularity values greater than the defined threshold will be considered marker reflections.
Enable Marker Circularity under the visual aids in the viewport to inspect which reflections are accepted, or omitted, by the circularity filter.
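Motive's exact roundness metric is internal, but the classic isoperimetric circularity (4π·area / perimeter², which is 1 for a perfect circle and smaller for elongated shapes) gives a feel for how a 0-to-1 threshold behaves:

```python
import math

def circularity(area, perimeter):
    # Isoperimetric roundness: 1.0 for a perfect circle, approaching 0
    # for long, flat shapes. Illustrative only; not Motive's internal metric.
    return 4 * math.pi * area / (perimeter ** 2)

def passes_circularity_filter(area, perimeter, threshold=0.6):
    # threshold is an illustrative value, not a Motive default
    return circularity(area, perimeter) >= threshold

# A circle of radius 2 scores exactly 1.0; a square of side 2 scores
# pi/4 (~0.785) and passes; a 10x1 rectangle scores ~0.26 and is rejected.
```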
The and deliver slightly different data to the host PC. In Object mode, cameras compute the 2D centroid location, size, and roundness of each marker on-camera and deliver those values to the host PC. In Precision mode, cameras instead send the thresholded pixel region itself to the host PC, where Motive performs the additional processing to determine the centroid location, size, and roundness of the reflections. Read more about .
After the 2D camera filter has been applied, each 2D centroid captured by a calibrated camera forms a marker ray: a 3D vector that extends from the camera through the detected centroid into the capture volume. When the minimum required number of rays (as defined in the settings) converge and intersect within the allowable maximum offset distance (defined by the settings), trajectorization of a 3D marker occurs. Trajectorization is the process of using 2D data to calculate the respective 3D marker trajectories in Motive.
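The core geometric test can be sketched for the two-ray case: find the closest points between the rays, and accept a reconstruction only if their separation (the residual) is within the allowed offset. This uses the standard skew-line formula; Motive's actual solver is considerably more involved:

```python
def triangulate_pair(p1, d1, p2, d2, max_residual=0.003):
    """Closest-point midpoint of two rays (origin p, direction d), or None
    if the rays miss each other by more than max_residual (an illustrative
    offset, not Motive's default)."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    sub = lambda a, b: tuple(x - y for x, y in zip(a, b))
    w0 = sub(p1, p2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b
    if abs(denom) < 1e-12:          # parallel rays cannot triangulate
        return None
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    q1 = tuple(p + s * k for p, k in zip(p1, d1))  # closest point on ray 1
    q2 = tuple(p + t * k for p, k in zip(p2, d2))  # closest point on ray 2
    residual = dot(sub(q1, q2), sub(q1, q2)) ** 0.5
    if residual > max_residual:     # rays do not converge closely enough
        return None
    return tuple((u + v) / 2 for u, v in zip(q1, q2))
```

A real system repeats this across all contributing rays and also enforces the minimum ray count described earlier.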
Monitoring marker rays is an efficient way of inspecting reconstruction outcomes. The rays show up by default; if not, they can be enabled under the visual aids options in the toolbar. There are two different types of marker rays in Motive: tracked rays and untracked rays. By inspecting these marker rays, you can easily find out which cameras are contributing to the reconstruction of a selected marker.
Motive processes marker rays with the camera calibration to reconstruct the respective markers, and the solver settings determine how 2D data gets trajectorized and solved into 3D data for tracking Rigid Bodies and/or Skeletons. The solver not only tracks from the marker rays but also utilizes pre-defined asset definitions to provide high-quality tracking. The default solver settings work for most tracking applications, and users should not need to modify them. That said, some of the basic settings that can be modified are summarized below.
The Live Pipeline settings don't have to be modified for most tracking applications. There are other reconstruction settings that can be adjusted to improve the acquisition of 3D data. For a detailed description of each setting, read through the page or refer to the corresponding tooltips.
In the , Motive is Live processing the data from captured 2D frames to obtain 3D tracking data in real-time, and you can inspect and monitor the marker rays from the . Any changes to the Live Pipeline (Solver/Camera) settings under the will be reflected immediately in the Live mode.
Recorded 3D data contains only the 3D coordinates that were live-reconstructed at the moment of capture; in other words, this data is completely independent of the 2D data once the recording has been made. You can still, however, view and use the recorded 2D data to optimize the solver parameters and reconstruct a fresh set of 3D data from it. To do so, you need to switch into 2D Mode in the .
In 2D Mode, Motive reconstructs in real-time from the recorded 2D data, using the reconstruction/solver settings that were configured at the time of recording; these settings are saved under the properties of the corresponding TAK file. Please note that for post-processing, the reconstruction/solver settings from the Take properties are applied instead of the settings from the panel. When in 2D Mode while editing a TAK file, any changes to the reconstruction/solver settings under the Take properties will be reflected in real-time in how the 3D reconstructions are solved.
Under the , click to access the menu options and check the 2D Mode option.
Performing post-processing reconstruction. To perform post-processing reconstruction, open the , select the desired Takes, right-click on the selection, and use either the Reconstruct pipeline or the Reconstruct and Auto-label pipeline from the context menu.
Camera Filter Settings: In Edit mode, 2D camera filters can still be modified from the tracking group properties in the . Modified filter settings will change which markers in the recorded 2D data get processed through the Live Pipeline engine.
Solver/Reconstruction Settings When you perform post-processing reconstruction on a recorded Take(s), a new set of 3D data will be reconstructed from the filtered 2D camera data. In this step, the solver settings defined under corresponding Take properties in the will be used. Note that the reconstruction properties under the are for the Live capture systems only.
Reconstruct and Auto-label will additionally apply the auto-labeling pipeline to the obtained 3D data and label any markers that associate with existing asset (Rigid Body or Skeleton) definitions. The auto-labeling pipeline is explained further on the page.
Post-processing reconstruction can be performed either on a Take's entire frame range or only within a desired frame range by selecting the range under the or in the . When nothing is selected, reconstruction will be applied to all frames.
Entire frames of multiple Takes can be selected and processed altogether by selecting desired Takes under the .
CS-100: Used to define a ground plane in small, precise motion capture volumes.
Tracking data types included in exported files:
- Reconstructed 3D Marker Data
- 6 Degrees of Freedom Rigid Body Data
- Skeleton Data
Overall Reprojection
Displays the overall resulting 2D and 3D reprojection error values from the calibration.
Worst Camera
Displays the highest 2D and 3D reprojection error value from the calibration.
Triangulation
The Triangulation section displays calibration results for residual offset values. A smaller residual error means more precise reconstructions.
Recommended: Recommended maximum residual offset for point cloud reconstruction.
Residual Mean Error: Average residual error from the calibration.
Overall Wand Error
Displays a mean error value of the detected wand length throughout the wanding process.
Ray Length
Displays a suggested maximum tracking distance, or a ray length, for each camera.
Overall Result
Grades the quality of the calibration result.
Maximum Error (px)
Displays the maximum reprojection error from the calibration.
Minimum Error (px)
Displays the minimum reprojection error from the calibration.
Average Error (px)
Displays the average reprojection error from the calibration.
Wand Error (mm)
Displays a mean error value of the detected wand length throughout the wanding process.
Calculation Time
Displays the total calculation time.
Frame Rate
Number of samples included per second of exported data.
Start Frame
Start frame of the exported data. You can either set it to the recorded first frame of the exported Take or to the start of the working range, or scope range, as configured under the Control Deck or in the Graph View pane.
End Frame
End frame of the exported data. You can either set it to the recorded end frame of the exported Take or to the end of the working range, or scope range, as configured under the Control Deck or in the Graph View pane.
Scale
Apply scaling to the exported tracking data.
Units
Set the unit in exported files.
Use Timecode
Includes timecode.
Export FBX Actors
Includes FBX Actors in the exported file. An Actor is a type of asset used in animation applications (e.g. MotionBuilder) to display imported motions and connect them to a character. In order to animate exported Actors, the associated markers will need to be exported as well.
Optical Marker Name Space
Overrides the default name spaces for the optical markers.
Marker Name Separator
Choose ":" or "_" for marker name separator. The name separator will be used to separate the asset name and the corresponding marker name when exporting the data (e.g. AssetName:MarkerLabel or AssetName_MarkerLabel). When exporting to Autodesk Motion Builder, use "_" as the separator.
Markers
Export each marker coordinates.
Unlabeled Markers
Includes unlabeled markers.
Calculated Marker Positions
Export asset's constraint marker positions as the optical marker data.
Interpolated Fingertips
Includes virtual reconstructions at the fingertips. Available only with Skeletons that support finger tracking.
Marker Nulls
Exports locations of each marker.
Export Skeleton Nulls
Can only be exported when solved data is recorded for exported Skeleton assets. Exports 6 Degree of Freedom data for every bone segment in selected Skeletons.
Rigid Body Nulls
Can only be exported when solved data is recorded for exported Rigid Body assets. Exports 6 Degree of Freedom data for selected Rigid Bodies. Orientation axes are displayed on the geometrical center of each Rigid Body.
Frame Rate
Number of samples included per second of exported data.
Start Frame
Start frame of the exported data. You can either set it to the recorded first frame of the exported Take or to the start of the working range, or scope range, as configured under the Control Deck or in the Graph View pane.
End Frame
End frame of the exported data. You can either set it to the recorded end frame of the exported Take or to the end of the working range, or scope range, as configured under the Control Deck or in the Graph View pane.
Scale
Apply scaling to the exported tracking data.
Units
Sets the unit for exported segment lengths.
Use Timecode
Includes timecode.
Export Skeletons
Export Skeleton nulls. Please note that the solved data must be recorded for Skeleton bone tracking data to be exported. It exports 6 Degree of Freedom data for every bone segment in selected Skeletons.
Skeleton Names
Names of Skeletons that will be exported into the FBX binary file.
Name Separator
Choose ":" or "_" for marker name separator. The name separator will be used to separate the asset name and the corresponding marker name when exporting the data (e.g. AssetName:MarkerLabel or AssetName_MarkerLabel). When exporting to Autodesk Motion Builder, use "_" as the separator.
Rigid Body Nulls
Can only be exported when solved data is recorded for exported Rigid Body assets. Exports 6 Degree of Freedom data for selected Rigid Bodies. Orientation axes are displayed on the geometrical center of each Rigid Body.
Rigid Body Names
Names of the Rigid Bodies to export into the FBX binary file as 6 DoF nulls.
Marker Nulls
Exports locations of each marker.
Frame Rate
Number of samples included per second of exported data.
Start Frame
Start frame of the exported data. You can either set it to the recorded first frame of the exported Take or to the start of the working range, or scope range, as configured under the Control Deck or in the Graph View pane.
End Frame
End frame of the exported data. You can either set it to the recorded end frame of the exported Take or to the end of the working range, or scope range, as configured under the Control Deck or in the Graph View pane.
Scale
Apply scaling to the exported tracking data.
Markers
Enabling this option includes X/Y/Z reconstructed 3D positions for each marker in exported CSV files.
Unlabeled Markers
Enabling this option includes tracking data for all of the unlabeled markers in the exported CSV file along with the labeled markers. If you want only labeled marker data, turn this export setting off.
Quality Statistics
Adds a column of Mean Marker Error values after each Rigid Body's position data.
Adds a column of marker quality values after each Rigid Body marker's data.
More details are provided in the below section.
Rigid Bodies
When this option is set to true, the exported CSV file will contain 6 Degrees of Freedom (6 DoF) data for each Rigid Body in the Take. 6 DoF data contains orientations (pitch, roll, and yaw) in the chosen rotation type, as well as 3D positions (x, y, z) of the Rigid Body center.
RigidBodyMarkers
Enabling this option includes 3D position data for each expected marker location (not the actual marker location) of Rigid Body assets. Compared to the reconstructed marker positions included in the Markers columns, the Rigid Body Markers show the solved positions of the markers as driven by the Rigid Body tracking, unaffected by occlusions.
Bones
When this option is set to true, exported CSV files will include 6 DoF data for each bone segment of the skeletons in exported Takes. 6 DoF data contain orientations (pitch, roll, and yaw) in the chosen rotation type, as well as the 3D position (x, y, z) of the bone's proximal joint center, which is the bone's pivot point.
BoneMarkers
Enabling this option includes 3D position data for each expected marker location (not the actual marker location) of the bone segments in skeleton assets. Compared to the real marker positions included in the Markers columns, the Bone Markers show the solved positions of the markers as affected by the skeleton tracking, but unaffected by occlusions.
Header information
Includes detailed information about the capture data as a header in exported CSV files. The types of information included in the header are listed in the following section.
Rotation Type
Rotation type determines whether Quaternions or Euler angles are used as the orientation convention in exported CSV files. For Euler rotations, a right-handed coordinate system is used, and all orders (XYZ, XZY, YXZ, YZX, ZXY, ZYX) of elemental rotations are available. For example, the XYZ order indicates that pitch is the rotation about the X axis, yaw the rotation about the Y axis, and roll the rotation about the Z axis.
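To see why the rotation order setting matters, the sketch below composes the same three elemental rotations in two different orders and gets two different matrices. This is plain illustrative Python, assuming one common extrinsic-composition convention (rotate about X, then Y, then Z); it is not Motive's internal math:

```python
import math

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

pitch, yaw, roll = map(math.radians, (10, 20, 30))
# XYZ order (extrinsic): rotate about X first, then Y, then Z.
R_xyz = matmul(rot_z(roll), matmul(rot_y(yaw), rot_x(pitch)))
# ZYX order: the same three angles applied in the reverse order.
R_zyx = matmul(rot_x(pitch), matmul(rot_y(yaw), rot_z(roll)))
# The two products differ, so the exported order must match the consumer's.
```

When importing exported Euler data into other software, the order chosen here has to match the convention expected on the receiving side, or orientations will be wrong.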
Unit
Sets units for positional data in exported CSV files.
Export Device Data
When set to True, separate CSV files for recorded device data will be exported. This includes force plate data and analog data from NI-DAQ devices. A CSV file will be exported for each device included in the Take.
Use World Coordinates
This option determines whether exported data is expressed in the world (global) or local coordinate system.
Global: Defines the position and orientation with respect to the global coordinate system of the calibrated capture volume. The global origin is the point on the ground plane that was set with a calibration square during the Calibration process.
Local: Defines each bone segment's position and orientation with respect to the coordinate system of its parent segment. Note that the hip of the skeleton is always the top-most parent of the segment hierarchy. Local coordinate axes can be made visible from the Application Settings or from the display properties of assets in the Data pane. Bone segment rotation values in the local coordinate space can roughly represent joint angles; however, for precise analysis, joint angles should be computed in biomechanical analysis software using the exported capture data (C3D).
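The relationship between local and global orientations can be illustrated with a chain in which every segment rotates about the same axis: the global orientation of a bone is then just the accumulated sum of the local rotations from the hip down. This is a simplified sketch of the concept, not Motive's solver:

```python
def bone_global_yaw(local_yaws_deg):
    """Global orientation of the last bone in a hip -> ... -> child chain
    when each segment stores its rotation relative to its parent (local
    mode). With all rotations about the same axis, global = sum of locals,
    wrapped into [0, 360)."""
    total = 0.0
    for a in local_yaws_deg:
        total = (total + a) % 360.0
    return total

# Hip rotated 30 deg, thigh 20 deg relative to hip, shin -10 relative to thigh:
assert bone_global_yaw([30.0, 20.0, -10.0]) == 40.0
```

With full 3D rotations the accumulation is a matrix (or quaternion) product rather than a sum, but the parent-to-child chaining is the same.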
1st row
General information about the Take and the export settings. Included information: the format version of the CSV export, the name of the TAK file, the captured frame rate, the export frame rate, the capture start time, the total number of frames, the rotation type, length units, and the coordinate space type.
2nd row
Empty
3rd row
Displays which data type is listed in each corresponding column. Data types include raw marker, Rigid Body, Rigid Body marker, bone, bone marker, or unlabeled marker. Read more about Marker Types.
4th row
Includes marker or asset labels for each corresponding data set.
5th row
Displays marker ID.
6th and 7th row
Header labels indicating which data each column contains: position and orientation on X/Y/Z.
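As a reading aid, the seven header rows described above can be pulled apart with a few lines of Python. The sample text below is fabricated to match the described layout, not copied from a real export, so treat column contents as illustrative:

```python
import csv
import io

# A minimal, fabricated export illustrating the 7-row header layout
# described above (real files contain many more columns).
sample = (
    "Format Version,1.23,Take Name,demo,Capture Frame Rate,120\n"  # 1st row
    "\n"                                                           # 2nd row: empty
    ",Rigid Body,Rigid Body\n"                                     # 3rd row: data types
    ",Wand,Wand\n"                                                 # 4th row: labels
    ",ID-1,ID-1\n"                                                 # 5th row: marker IDs
    ",Rotation,Position\n"                                         # 6th row
    "Frame,X,X\n"                                                  # 7th row: axes
)

rows = list(csv.reader(io.StringIO(sample)))
info = dict(zip(rows[0][0::2], rows[0][1::2]))  # 1st row is key/value pairs
data_types = rows[2]                            # 3rd row: per-column data type
labels = rows[3]                                # 4th row: asset/marker labels
```

Pairing rows 3 through 7 column-by-column gives, for every data column, its type, label, ID, and axis.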
Name | Name of the Take that will be recorded. |
SessionName | Name of the session folder. |
Notes | Informational note for describing the recorded Take. |
Description | (Reserved) |
Assets |
DatabasePath | The file directory where the recorded captures will be saved. |
Start Timecode |
PacketID | (Reserved) |
HostName | (Reserved) |
ProcessID | (Reserved) |
Name | Name of the recorded Take. |
Notes | Informational notes for describing a recorded Take. |
Assets |
Timecode |
HostName | (Reserved) |
ProcessID | (Reserved) |
NatNet SDK | Y | Y | Y | Runs locally or over a network. The NatNet SDK includes multiple sample applications for C/C++, OpenGL, WinForms/.NET/C#, MATLAB, and Unity. It also includes a C/C++ sample showing how to decode Motive UDP packets directly without the client libraries (for cross-platform clients such as Linux). C/C++, VB/C#/.NET, or MATLAB |
Autodesk MotionBuilder Plugin | Y | Y | Y | Runs locally or over a network. Allows streaming of both recorded data and real-time capture data for markers, rigid bodies, and skeletons. Comes with the MotionBuilder resources: OptiTrack Optical Device, OptiTrack Skeleton Device, and OptiTrack Insight VCS. |
Visual3D | Y | N | N | With a Visual3D license, you can download the Visual3D server application, which is used to connect an OptiTrack server to the Visual3D application. Using the plugin, Visual3D receives streamed marker data to solve precise skeleton models for biomechanics applications. |
The MotionMonitor | Y | N | N | The MotionMonitor is capable of receiving live streamed motion capture data from Motive. Streamed data is then solved in real time using the live marker data. |
Unreal Engine 4 Plugin | N | Y | N |
Unity Plugin | N | Y | N |
3ds Max Plugin | N | Y | N | (Unmaintained) Runs locally or over a network. Supports 3ds Max 2009-2012. This plugin allows Autodesk 3ds Max to receive skeletons and rigid bodies from an OptiTrack server application such as Motive. |
VCS:Maya | N | Y | N | A separate license is required. Streams capture data into Autodesk Maya for use with the Virtual Camera System. |
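The NatNet SDK row above mentions decoding Motive's UDP packets directly without the client libraries. As a sketch of the first step, each NatNet packet begins with a small header; the 2-byte little-endian message ID and 2-byte payload size shown here follow the SDK's PacketClient sample, but verify against your SDK version before relying on this, since the wire format can change between releases:

```python
import struct

def parse_natnet_header(packet):
    """Parse the leading header of a NatNet UDP packet: a little-endian
    uint16 message ID followed by a uint16 payload size (per the SDK's
    PacketClient sample; verify against your SDK version)."""
    message_id, payload_size = struct.unpack_from("<HH", packet, 0)
    return message_id, payload_size

# A fabricated frame-of-data packet (message ID 7 in recent SDK versions)
# with a 256-byte zeroed payload, purely for illustration:
pkt = struct.pack("<HH", 7, 256) + b"\x00" * 256
mid, size = parse_natnet_header(pkt)
```

Decoding the payload itself (marker counts, rigid body poses, and so on) follows the field ordering documented in the PacketClient sample for the matching SDK version.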
File |
Open File (TTP, CAL, TAK, TRA, SKL) | CTRL + O |
Save Current Take | CTRL + S |
Save Current Take As | CTRL + Shift + S |
Export Tracking Data from current (or selected) TAKs | CTRL + Shift + Alt + S |
Basic |
Toggle Between Live/Edit Mode | ~ |
Record Start / Playback start | Space Bar |
Select All | CTRL + A |
Undo | Ctrl + Z |
Redo | Ctrl + Y |
Cut | Ctrl + X |
Paste | Ctrl + V |
Layout |
Calibrate Layout | Ctrl+1 |
Create Layout | Ctrl+2 |
Capture Layout | Ctrl+3 |
Edit Layout | Ctrl+4 |
Custom Layout [1...] | Ctrl+[5...9], Shift+[1...9]
Perspective View Pane (3D) |
Follow Selected | G |
Zoom to Fit Selection | F |
Zoom to Fit All | Shift + F |
Reset Tracking | Ctrl+R
" |
Shift + " |
Jog Timeline | Alt + Left Click |
Create Rigid Body From Selected | Ctrl+T |
Refresh Skeleton Asset | Ctrl + R with a skeleton asset selected |
Enable/Disable Asset Editing | T
Toggle Labeling Mode | D |
Select Mode | Q |
Translation Mode | W |
Rotation Mode | E |
Scale Mode | R |
Camera Preview (2D) |
Video Modes |
Data Management Pane |
Remove or Delete Session Folders | Delete |
Remove Selected Take | Delete |
Paste Shots as Empty Take from Clipboard | Ctrl+V
Timeline / Graph View |
Toggle Live/Edit Mode | ~ |
Again+ | + |
Live Mode: Record | Space |
Edit Mode: Start/stop playback | Space |
Rewind (Jump to the first frame) | Ctrl + Shift + Left Arrow |
PageTimeBackward (Ten Frames) | Down Arrow |
StepTimeBackward (One Frame) | Left Arrow |
StepTimeForward (One Frame) | Right Arrow |
PageTimeForward (Ten Frames) | Up Arrow |
FastForward (Jump to the last frame) | Ctrl + Shift + Right Arrow |
To next gapped frames | Z |
To previous gapped frames | Shift + Z |
Graph View - Delete Selected Keys in 3D data | Delete when frame range is selected |
Show All | Shift + F |
Frame To Selected | F |
Zoom to Fit All | Shift + F |
Editing / Labeling Workflow |
Apply smoothing to selected trajectory | X |
Apply cubic fit to the gapped trajectory | C |
Toggle Labeling Mode | D |
To next gapped frame | Z |
To previous gapped frame | Shift + Z |
Enable/Disable Asset Editing | T |
Select Mode | Q |
Translation Mode | W |
Rotation Mode | E |
Scale Mode | R |
Delete selected key | DELETE |
Frame Rate
Number of samples included per second of exported data.
Start Frame
Start frame of the exported data. You can either set it to the recorded first frame of the exported Take or to the start of the working range, or scope range, as configured under the Control Deck or in the Graph View pane.
End Frame
End frame of the exported data. You can either set it to the recorded end frame of the exported Take or to the end of the working range, or scope range, as configured under the Control Deck or in the Graph View pane.
Scale
Apply scaling to the exported tracking data.
Units
Sets the length units to use for exported data.
Axis Convention
Sets the axis convention for exported data. This can be set to a custom convention, or to preset conventions for exporting to MotionBuilder or Visual3D/The MotionMonitor.
X Axis Y Axis Z Axis
Allows customization of the axis convention in the exported file by determining which positional data is included in each corresponding axis.
Single Joint Torso
When this is set to true, there will be only one skeleton segment for the torso. When set to false, there will be extra joints on the torso, above the hip segment.
Hands Downward
Sets the exported skeleton base pose to use hands facing downward.
MotionBuilder Names
Sets the name of each skeletal segment according to the bone naming convention used in MotionBuilder.
Skeleton Names
Set this to the name of the skeleton to be exported.
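A custom axis convention of the kind described under Axis Convention above amounts to a fixed remapping of each point's components. For example, converting right-handed Y-up data (Motive's default) to a right-handed Z-up convention; the specific mapping below is illustrative, and target conventions vary by application:

```python
def remap_y_up_to_z_up(p):
    """Convert a right-handed Y-up point to a right-handed Z-up convention
    by mapping (x, y, z) -> (x, -z, y). The old up axis (y) becomes the new
    z axis; negating the old z keeps the basis right-handed."""
    x, y, z = p
    return (x, -z, y)

# A point one unit above the ground plane stays one unit "up" after the swap:
assert remap_y_up_to_z_up((0.0, 1.0, 0.0)) == (0.0, 0.0, 1.0)
```

The same permutation must also be applied to orientations (by conjugating rotations with the remap matrix), which is why the preset conventions for specific target applications are the safer choice when one exists.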
List of the assets involved in the Take.
Timecode values (SMPTE) for frame alignment, or for reserving future record-trigger events on timecode-supported systems. Camera systems usually run at higher frame rates than the SMPTE timecode. In the triggering packets, the sub-frame value is always equal to 0 at the trigger.
List of the assets involved in the Take.
Timecode values (SMPTE) for frame alignment. The sub-frame value is zero.
C-Motion wiki:
Runs locally or over a network. Supports Unreal Engine versions up to 4.17. This plugin allows streaming of rigid bodies and integration of HMD tracking within Unreal Engine projects. For more details, read through the documentation page.
Runs locally or over a network. This plugin allows streaming of tracking data and integration of HMD tracking within Unity projects. For more details, read through the documentation page.