This page is an introduction showing how to use OptiTrack cameras to set up an LED Wall for Virtual Production. This process is also called In-Camera Virtual Effects or InCam VFX. It is an industry technique used to simulate the background of a film set so that the actor appears to be in another location.
This tutorial requires Motive 2.3.x, Unreal Engine 4.27, and the Unreal Engine: OptiTrack Live Link Plugin.
This is a list of required hardware and what each portion is used for.
The OptiTrack system is used to track the camera, calibration checkerboard, (optional) LED Wall, and (optional) any other props or additional cameras. As far as OptiTrack hardware is concerned, you will need all of the typical hardware for a motion capture system plus an eSync2, BaseStation, CinePuck, Probe, and a few extra markers. Please refer to the Quick Start Guide for instructions on how to do this.
You will need one computer to drive Motive/OptiTrack and another to drive the Unreal Engine System.
Motive PC - The CPU is the most important component and should use the latest generation of processors.
Unreal Engine PC - Both the CPU and GPU are important. However, the GPU in particular needs to be top of the line to render the scene, for example an RTX 3080 Ti. Setups that involve multiple LED walls stitched together will require graphics cards that can synchronize with each other, such as the NVIDIA A6000.
The Unreal Engine computer will also require an SDI input card with both SDI and genlock support. We used the BlackMagic Decklink SDI 4K and the BlackMagic Decklink 8K Pro in our testing, but other cards will work as well.
You will need a studio video camera with SDI out, timecode in, and genlock in support. Any studio camera with these BNC ports will work, and there are a lot of different options for different budgets. Here are some suggestions:
Sony PXW-FS7 (What we use internally)
Etc...
Cameras without these synchronization features can be used, but may look like they are stuttering due to frames not perfectly aligning.
A camera dolly or other type of mounting system will be needed to move and adjust the camera around your space, so that the movement looks smooth.
Your studio camera should have a cage around it in order to mount objects to the outside of it. You will need to rigidly mount your CinePuck to the outside. We used SmallRig NATO Rail and Clamps for the cage and Rigid Body mounting fixtures.
You’ll also need a variety of cables to connect from the camera back to where the computers are located. This includes things such as power cables, BNC cables, USB extension cables (optional for powering the CinePuck), etc... These will not all be listed here, since they will depend on the particular setup for your system.
Many systems will want a lens encoder in the mix. This is only necessary if you plan on zooming your lens in/out between shots. We do not use this device in this example for simplicity.
In order to run your LED wall, you will need two things: an LED Wall and a Video Processor.
For large walls composed of LED wall subsections you will need an additional video processor and an additional render PC for each wall as well as an SDI splitter. We are using a single LED wall for simplicity.
The LED Wall portion contains the grid of LED lights, the power structure, and the connections from the panels to a video controller, but it does not include the ability to send an HDMI signal to the wall.
We used Planar TVF 125 for our video wall, but there are many other options out there depending on your needs.
The video processor is responsible for taking an HDMI/Display Port/SDI signal and rendering it on the LED wall. It's also responsible for synchronizing the refresh rate of the LED wall with external sources.
The video processor we used for controlling the LED wall was the Color Light Z6. However, Brompton Technology video processors are a more typical film standard.
You will either need a timecode generator AND a genlock generator, or a single device that does both. Without these devices, the exposure of your camera will not align with when the LED wall renders, and you may capture the wall mid-refresh in your footage. These signals are used to synchronize Motive, the cinema camera, LED Walls, and any other devices together.
Timecode - The timecode signal should be fed into Motive and the Cinema camera. The SDI signal from the camera will plug into the SDI card, which will carry the timecode to the Unreal Engine computer as well.
Genlock - The genlock should be fed into Motive, the cinema camera, and the Video Processor(s).
Timecode is for frame alignment. It allows you to synchronize data in post by aligning the timecode values together. (However, it does not guarantee that the cameras expose and the LED wall renders at the same time.) There are a variety of different manufacturers of timecode generators that will work. Here are some suggestions:
Etc...
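As a concrete example of frame alignment: SMPTE timecode stamps each frame with an hours:minutes:seconds:frames value (e.g. 01:23:45:12), so footage from the cinema camera and a Take recorded in Motive can be lined up in post by matching those stamps, even if the two recordings were started at different moments.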
Genlock is for frame synchronization. It allows you to synchronize data in real-time by aligning the times when a camera exposes or an LED Wall renders its image. (However, it does not align frame numbers, so one system could be on frame 1 and another on frame 23.) There are a variety of different manufacturers of genlock generators that will work. Here are some suggestions:
Etc...
Below is a diagram that shows what devices are connected to each other. Both Genlock and Timecode are connected via BNC ports on each device.
Plug the Genlock Generator into:
eSync2's Genlock-In BNC port
Any of the Video Processor's BNC ports
Studio Video Camera's Genlock port
Plug the TimeCode Generator into:
eSync2's Timecode-In BNC port
Studio Video Camera's TC IN BNC port
Plug the Studio Video Camera into:
Unreal Engine PC SDI IN port for Genlock via the SDI OUT port on the Studio Video Camera
Unreal Engine PC SDI IN port for Timecode via the SDI OUT port on the Studio Video Camera
A rigid board with a black and white checkerboard on it is needed to calibrate the lens characteristics. This object will likely be replaced in the future.
There are a lot of hardware devices required, so below is a rough list of required hardware as a checklist.
Truss or other mounting structure
Prime/PrimeX Cameras
Ethernet Cables
Network Switches
Calibration Wand
Calibration Square
Motive License
License Dongle
Computer (for Motive)
Network Card for the Computer
CinePuck
BaseStation (for CinePuck)
eSync2
BNC Cables (for eSync2)
Timecode Generator
Genlock Generator
Probe (optional)
Extra markers or trackable objects (optional)
Cinema/Broadcast Camera
Camera Lens
Camera Movement Device (ex. dolly, camera rails, etc...)
Camera Cage
Camera power cables
BNC Cables (for timecode, SDI, and Genlock)
USB C extension cable for powering the CinePuck (optional)
Lens Encoder (optional)
Truss or mounting system for the LED Wall
LED Wall
Video Processor
Cables to connect between the LED Wall and Video Processor
HDMI or other video cables to connect to Unreal PC
Computer (for Unreal Engine)
SDI Card for Cinema Camera input
Video splitters (optional)
Video recorder (for recording the camera's image)
Checkerboard for Unreal calibration process
Non-LED Wall based lighting (optional)
Next, we'll cover how to configure Motive for tracking.
We assume that you have already set up and calibrated Motive before starting this tutorial. If you need help getting started with Motive, then please refer to our Getting Started wiki page.
After calibrating Motive, you'll want to set up your active hardware. This requires a BaseStation and a CinePuck.
Plug the BaseStation into a Power over Ethernet (PoE) switch just like any other camera.
CinePuck
Firmly attach the CinePuck to your Studio Camera using your SmallRig NATO Rail and Clamps on the cage of the camera.
The CinePuck can be mounted anywhere on the camera, but for best results put the puck closer to the lens.
Turn on your CinePuck, and let it calibrate the IMU bias by waiting until the flashing red and orange lights turn into flashing green lights.
It is recommended to power the CinePuck over a USB connection for the duration of filming a scene to avoid running out of battery power; a light on the CinePuck should turn on when power is connected.
Change the tracking mode to Active + Passive.
Create a Rigid Body out of the CinePuck markers.
For active markers, turning up the allowable residual will usually improve tracking.
Go through a refinement process in the Builder pane to get the highest quality Rigid Body.
Show advanced settings for that Rigid Body, then input the Active Tag ID and Active RF (radio frequency) Channel for your CinePuck.
If you don’t have this information, then consult the IMU tag instructions found here: Active Marker Tracking: IMU Setup.
If you input the IMU properties incorrectly or it is not successfully connecting to the BaseStation, then your Rigid Body will turn red. If you input the IMU properties correctly and it successfully connects to the BaseStation, then it will turn orange and need to go through a calibration process. Please refer to the table below for more detailed information.
You will need to move the Rigid Body around in each axis until it turns back to the original color. At this point you are tracking with both the optical marker data and the IMU data through a process called sensor fusion. This takes the best aspects of both the optical motion capture data and the IMU data to make a tracking solution better than when using either individually. As an option, you may now turn the minimum markers for your Rigid Body down to 1 or even 0 for difficult tracking situations.
The Rigid Body's color in the viewport indicates the IMU connection status:
Same as the assigned Rigid Body color: Motive is connected to the IMU and receiving data.
Orange: the IMU is attempting to calibrate. Slowly rotate the object until the IMU finishes calibrating.
Red: the Rigid Body is configured for receiving IMU data, but no data is coming through the designated RF channel. Make sure the Active Tag ID and RF Channel values match the configuration on the active Tag/Puck.
After Motive is configured, we'll need to set up the LED Wall and Calibration Board as trackable objects. This is not strictly necessary for the LED Wall, but it will make setup easier later and remove the need to set the ground plane perfectly.
Before configuring the LED Wall and Calibration Board, you'll first want to create a probe Rigid Body. The probe can be used to measure locations in the volume using the calibrated position of the metal tip. For more information on using the probe measurement tool, please visit our wiki page Measurement Probe Kit Guide.
Place four to six markers on the LED Wall without covering the LEDs on the Wall.
Use the probe to sample the corners of the LED Wall.
You will need to make a simple plane geometry that is the size of your LED wall using your favorite 3D editing tool such as Blender or Maya. (A sample plane comes with the Unreal Engine Live Link plugin if you need a starting place.)
If the plane does not perfectly align with the probe points, then you will need to use the gizmo tool to align the geometry. If you need help setting up or using the Gizmo tool please visit our other wiki page Gizmo Tool: Translate, Rotate, and Scale Gizmo.
Any changes you make to the geometry will need to be on the Rigid Body position and not the geometry offset.
You can make these adjustments using the Builder pane, then zeroing the Attach Geometry offsets in the Properties pane.
Place four to six markers without covering the checkered pattern.
Use the probe to sample the bottom left vertex of the grid.
Use the gizmo tool to orient the Rigid Body pivot and place pivot in the sampled location.
Next, you'll need to make sure that your eSync is configured correctly.
If not already done, plug your genlock and timecode signals into the appropriately labeled eSync input ports.
Select the eSync in the Devices pane.
In the Properties pane, check to see that your timecode and genlock signals are coming in correctly at the bottom.
Then, set the Source to Video Genlock In, and set the Input Multiplier to a value of 4 if your genlock is at 30 Hz or 5 if your genlock is at a rate of roughly 24 Hz.
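For example, a 30 Hz genlock with an Input Multiplier of 4 yields a 30 × 4 = 120 Hz system rate, and a roughly 24 Hz (23.98 Hz) genlock with a multiplier of 5 yields roughly 24 × 5 ≈ 120 Hz, so both land the camera system at the same tracking rate.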
Your cameras should stop tracking for a few seconds, then the rate in the Devices pane should update if you are configured correctly.
Make sure to turn on Streaming in Motive, then you are all done with the Motive setup.
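Before moving on to Unreal Engine, it can be worth confirming that Rigid Body data is actually leaving Motive. The snippet below is a minimal sketch using the NatNetClient.py sample module that ships in the NatNet SDK's PythonClient folder; the method names follow the 4.x sample and vary slightly between SDK versions, and the loopback addresses are placeholders for your own network setup.

from NatNetClient import NatNetClient  # sample module from the NatNet SDK

def receive_rigid_body_frame(body_id, position, rotation):
    # position is an (x, y, z) tuple in meters; rotation is a quaternion (qx, qy, qz, qw)
    print("Rigid Body", body_id, position, rotation)

client = NatNetClient()
client.set_client_address("127.0.0.1")  # placeholder: IP of this machine
client.set_server_address("127.0.0.1")  # placeholder: IP of the Motive PC
client.set_use_multicast(True)          # match the transport setting in Motive's Streaming pane
client.rigid_body_listener = receive_rigid_body_frame
client.run()  # starts background threads; CinePuck poses should begin printing

If poses print at the expected rate, streaming is configured correctly and you can proceed to the Unreal Engine setup.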
Start Unreal Engine and choose the default project called “InCamera VFX” under the “Film, Television, and Live Events” section.
Before we get started verify that the following plugins are enabled:
Camera Calibration (Epic Games, Inc.)
OpenCV Lens Distortion (Epic Games, Inc.)
OptiTrack - LiveLink (OptiTrack)
Media Player Plugin for your capture card (For example, Blackmagic Media Player)
Media Foundation Media Player
WMF Media Player
Many of these will be already enabled.
The main setup process consists of four general steps:
1. Set up the video media data.
2. Set up FIZ and Live Link sources.
3. Track and calibrate the camera in Unreal Engine.
4. Set up nDisplay.
Right click in the Content Browser Panel > Media > Media Bundle and name the Media Bundle something appropriate.
Double click the Media Bundle you just created to open the properties for that object.
Set the Media Source to the Blackmagic Media Source, the Configuration to the resolution and frame rate of the camera, and set the Timecode Format to LTC (Linear Timecode).
Drag this Media Bundle object into the scene and you’ll see your video appear on a plane.
You’ll also need to create two other video sources doing roughly the same steps as above.
Right click in the Content Browser Panel > Media > Blackmagic Media Source.
Open it, then set the configuration and timecode options.
Right click in the Content Browser Panel > Media > Media Profile.
Click Configure Now, then Configure.
Under Media Sources set one of the sources to Blackmagic Media Source, then set the correct configuration and timecode properties.
Before we set up timecode and genlock, it’s best to have a few visual metrics visible to validate that things are working.
In the Viewport click the triangle dropdown > Show FPS and also click the triangle dropdown > Stat > Engine > Timecode.
This will show timecode and genlock metrics in the 3D view.
If not already open you’ll probably want the Window > Developer Tools > Timecode Provider and Window > Developer Tools > Genlock panels open for debugging.
You should notice that your timecode and genlock are noticeably incorrect; this will be corrected in later steps below.
The timecode will probably just be the current time.
To create a timecode blueprint, right click in the Content Browser Panel > Blueprint > BlackmagicTimecodeProvider and name the blueprint something like “BM_Timecode”.
The settings for this should match what you did for the Video Data Source.
Set the Project Settings > Engine General Settings > Timecode > Timecode Provider = “BM_Timecode”.
At this point your timecode metrics should look correct.
Right click in the Content Browser Panel > Blueprint > BlackmagicCustomTimeStep and name the blueprint something like “BM_Genlock”.
The settings for this should match what you did for the Video Data Source.
Set the Project Settings > Engine General Settings > Framerate > Custom TimeStep = “BM_Genlock”.
Your genlock pane should be reporting correctly, and the FPS should be roughly your genlock rate.
Debugging Note: Sometimes you may need to close then restart the MediaBundle in your scene to get the video image to work.
Shortcut: There is a shortcut for setting up the basic Focus Iris Zoom file and the basic lens file. In the Content Browser pane you can click View Option and Show Plugin Content, navigate to the OptiTrackLiveLink folder, then copy the contents of this folder into your main content folder. Doing this will save you a lot of steps, but we will cover how to make these files manually as well.
We need to make a blueprint responsible for controlling our lens data.
Right click in the Content Browser > Live Link > Blueprint Virtual Subject, then select the LiveLinkCameraRole in the dropdown.
Name this file something like “FIZ_Data”.
Open the blueprint. Create two new objects called Update Virtual Subject Static Data and Update Virtual Subject Frame Data.
Connect the Static Data one to Event on Initialize and the Frame Data one to Event on Update.
Right click on the blue Static Data and Frame Data pins and Split Struct Pin.
In the Update Virtual Subject Static Data object:
Disable Location Supported and Rotation Supported, then enable the Focus Distance Supported, Aperture Supported, and Focal Length Supported options.
Create three new float variables called Zoom, Iris, and Focus.
Drag them into the Event Graph and select Get to allow those variables to be accessed in the blueprint.
Connect Zoom to Frame Data Focal Length, connect Iris to Frame Data Aperture, and connect Focus to Frame Data Focus Distance.
Compile your blueprint.
Select your variables and set the default value to the lens characteristics you will be using.
For our setup we used: Zoom = 20 mm, Iris = f/2.8, and Focus = 260 cm.
Compile and save your FIZ blueprint.
Both the Focus and Iris graphs should form an elongated "S" shape based on the two data points provided for each in the lens file steps below.
To create a lens file, right click in the Content Browser > Miscellaneous > Lens File, then name the file appropriately.
Double click the lens file to open it.
Switch to the Lens File Panel.
Click the Focus parameter.
Right click in the graph area and choose Add Data Point, click Input Focus and enter 10, then enter 10 for the Encoder mapping.
Repeat the above step to create a second data point, but with values of 1000 and 1000.
Click the Iris parameter.
Right click in the graph area and choose Add Data Point.
Click Input Iris and enter 1.4, then enter 1.4 for the Encoder mapping.
Repeat the above step to create a second data point, but with values of 22 and 22.
Save your lens file.
The above process is to set up the valid ranges for our lens focus and iris data. If you use a lens encoder, then this data will be controlled by the input from that device.
In the Window > Live Link pane, click the + Source icon, then Add Virtual Subject.
Choose the FIZ_Data object that we created above in the FIZ Data section of this OptiTrack Wiki page and add it.
Click the + Source icon, navigate to the OptiTrack source, and click Create.
Click Presets and create a new preset.
Go to Edit > Project Settings, search for Live Link, and set the preset that you just created as the Default Live Link Preset.
You may want to restart your project at this point to verify that the live link pane auto-populates on startup correctly. Sometimes you need to set this preset twice to get it to work.
From the Place Actors window, create an Empty Actor; this will act as the camera parent.
Add it to the nDisplay_InCamVFX_Config object.
Create another actor object and make it a child of the camera parent actor.
Zero out the location of the camera parent actor from the Details pane under Transform.
For our setup, in the image to the right, we have labeled the empty actor “Cine_Parent” and its child object “CineCameraActor1”.
Select the default “CineCameraActor1” object in the World Outliner pane.
In the Details pane there should be a total of two LiveLinkComponentControllers.
You can add a new one by using the + Add Component button.
For our setup we have labeled one live link controller “Lens” and the other “OptiTrack”.
Click Subject Representation and choose the Rigid Body associated with your camera.
For the “Lens” Live Link Controller, click Subject Representation and choose the virtual camera subject (FIZ_Data). Then navigate to Role Controllers > Camera Role > Camera Calibration > Lens File Picker and select the lens file you created. This process allows your camera to be tracked and associates the lens data with the camera you will be using.
Select the Place Actors window to create an Empty Actor and add it to the nDisplay_InCamVFX_Config object.
Zero out the location of this actor.
In our setup we have named our Empty Actor "Checkerboard_Parent".
From the Place Actors window also create a “Camera Calibration Checkerboard” actor for validating our camera lens information later.
Make it a child of the “Checkerboard_Parent” actor from before.
Configure the Num Corner Row and Num Corner Cols.
These values should be one less than the number of black/white squares on your calibration board. For example, if your calibration board has 9 rows of alternating black and white squares and 13 columns across of black and white squares, you would input 8 in the Num Corner Row field and 12 in the Num Corner Cols field.
Also input the Square Side Length which is the measurement of a single square (black or white).
Set the Odd Cube Materials and Even Cube Materials to solid colors to make it more visible.
Select "Checkerboard_Parent" and + Add Component of a Live Link Controller.
Add the checkerboard Rigid Body from Motive as the Subject Representation.
At this point your checkerboard should be tracking in Unreal Engine.
Double click the "Lens" file from earlier and go to the Calibration Steps tab and the Lens Information section.
On the right, select your Media Source.
Set the Lens Model Name and Serial Number to some relevant values based on what physical lens you are using for your camera.
The Sensor Dimensions are the trickiest portion to get correct here.
This is the physical size of the image sensor on your camera in millimeters.
You will need to consult the documentation for your particular camera to find this information.
For example, for the Sony FS7 recording at 1920x1080, we'd input X = 22.78 mm and Y = 12.817 mm for the Sensor Dimensions.
The lens information will calculate the intrinsic values of the lens you are using.
Choose the Lens Distortion Checkerboard algorithm and choose the checkerboard object you created above.
The Transparency slider can be adjusted between showing the camera image, 3D scene, or a mix of both. Show at least some of the raw camera image for this step.
Place the checkerboard in the view of the camera, then click in the 2D view to take a sample of the calibration board.
You will want to give the algorithm a variety of samples mostly around the edge of the image.
You will also want to get some samples of the calibration board at two different distances. One closer to the camera and one closer to where you will be capturing video.
Taking samples can be a bit of an art form.
You will want somewhere around 15 samples.
Once you are done click Add to Lens Distortion Calibration.
With an OptiTrack system you are looking for an RMS Reprojection Error of around 0.1 at the end. Slightly higher values can be acceptable as well, but will be less accurate.
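For context on that number: RMS reprojection error is conventionally the root-mean-square pixel distance between where each checkerboard corner was detected in the image and where the calibrated lens model predicts it should appear, i.e. sqrt((1/N) * sum_i ||detected_i - reprojected_i||^2) over all N sampled corners. A value of 0.1 therefore means the lens model agrees with the detections to about a tenth of a pixel on average.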
The Nodal Offset tab will calculate the extrinsics, i.e. the position of the camera relative to the OptiTrack Rigid Body.
Select the Nodal Offset Checkerboard algorithm and your checkerboard from above.
Take samples similar to the Lens Distortion section.
You will want somewhere around 5 samples.
Click Apply to Camera Parent.
This will modify the position of the “Cine_Parent" actor created above.
Set the Transparency to 0.5.
This will allow you to see both the direct feed from the camera and the 3D overlay at the same time. As long as your calibration board is correctly set up in the 3D scene, then you can verify that the 3D object perfectly overlays on the 2D studio camera image.
In the World Outliner, right click the Edit nDisplay_InCamVFX_Config button. This will load the controls for configuring nDisplay.
For larger setups, you will configure a display per section of the LED wall. For smaller setups, you can delete additional sections (VP_1, VP_2, and VP_3) accordingly from the 3D view and the Cluster pane.
For a single display:
Select VP_0 and in the Details pane set the Region > W and H properties to the resolution of your LED display.
Do the same for Node_0 (Master).
Select VP_0 and load the plane mesh we created to display the LED wall in Motive.
An example file for the plane mesh can be found in the Contents folder of the OptiTrack Live Link Plugin. This file defines the physical dimensions of the LED wall.
Select the "ICVFXCamera" actor, then choose your camera object under In-Camera VFX > Cine Camera Actor.
Compile and save this blueprint.
Click Export to save out the nDisplay configuration file. (This file is what you will be asked for in the future in an application called Switchboard, so save it somewhere easy to find.)
Go back to your main Unreal Engine window and click on the nDisplay object.
Click + Add Component and add a Live Link Controller.
Set the Subject Representation to the Rigid Body for your LED Wall in Motive and set the Component to Control to “SM_Screen_0”.
At this point your LED Wall should be tracked in the scene, but none of the rendering will look correct yet.
To validate that this was all setup correctly you can turn off Evaluate Live Link for your CineCamera and move it so that it is in front of the nDisplay LED Wall.
Make sure to re-enable Evaluate Live Link afterwards.
The next step would be to add whatever reference scene you want to use for your LED Wall Virtual Production shoot. For example, we just duplicated a few of the color calibrators (see image to the right) included with the sample project, so that we have some objects to visualize in the scene.
If you haven’t already you will need to go to File > Save All at this point. Ideally, you should save frequently during the whole process to make sure you don’t lose your data.
Click the double arrows above the 3D Viewport >> and choose Switchboard > Launch Switchboard Listener. This launches an application that listens for a signal from Switchboard to start your experience.
Click the double arrows above the 3D Viewport >> and choose Launch Switchboard.
If this is your first time doing this, then there will be a small installer that runs in the command window.
A popup window will appear.
Click the Browse button next to the uProject option and navigate to your project file (.uproject).
Then click Ok and the Switchboard application will launch.
In Switchboard click Add Device, choose nDisplay, click Browse and choose the nDisplay configuration file (.ndisplay) that you created previously.
In Settings, verify that the correct project, directories and nDisplay are being referenced.
Click the power plug icon to Connect all devices.
Make sure to save and close your Unreal Engine project.
Click the up arrow button to Start All Connected Devices.
The image on the LED wall should look different when you point the camera at it, since it is calculating for the distortion and position of the lens. From the view of the camera it should almost look like you are looking through a window where the LED wall is located.
You might notice that the edge of the camera’s view is a hard edge. You can fix this and expand the field of view slightly to account for small amounts of lag by going back into the nDisplay object in your Unreal Engine project.
Select the "ICVFXCamera" object in the Components pane.
In the Details pane set the Field of View Multiplier to a value of about 1.2 to account for any latency, then set the Soft Edge > Top and Bottom and Sides properties to around 0.25 to blur the edges.
From an outside perspective, the final product will look like a static image that updates based on where the camera is pointing. From the view of the cameras, it will essentially look like you are looking through a window to a different world.
In our example, we are just tracking a few simple objects. In real productions you’ll use high quality 3D assets and place objects in front of the LED wall that fit with the scene behind to create a more immersive experience, like seen in the image to the right. With large LED walls, the walls themselves provide the natural lighting needed to make the scene look realistic. With everything set up correctly, what you can do is only limited by your budget and imagination.
Motive 3.0.2 Update:
Following the Motive 3.0.2 release, an internet connection is no longer required for initial use of Motive. If you are currently using Motive 3.0.1 or older, please install this new release from our Software webpage. Please note that an internet connection is still required to download Motive.exe from the OptiTrack website.
Important License Update:
Motive 3 introduces a new licensing system. Please check the OptiTrack website for details on Motive licenses.
Security Key (Motive 3.x): Starting from version 3.0, a USB Security Key is required to use Motive. USB Hardware Keys that were used for activating older versions of Motive will no longer work with 3.0, and they will need to be replaced with the USB Security Key. For any questions, please contact us.
Hardware Key (Motive 2.x or below): Motive 2.x versions still follow the same system and require a USB Hardware Key.
USB Cameras
USB cameras, including the Flex series, tracking bars, and the Slim3U, are not currently supported in 3.x versions. For USB camera systems, please use Motive 2.x versions. Go to the Motive 2.3 documentation.
For More Information:
Visit our website for more information on the new versions:
What's New: https://www.optitrack.com/software/motive/
Changelog and Download link: https://www.optitrack.com/support/downloads/
This page includes all of the Motive tutorial videos for visual learners.
Updated videos coming soon!
Below is a quick start guide for most Prime Color and Prime Color FS setups. This setup and settings optimize the Prime Color Camera systems and are strongly recommended for best performance. Please see our full Prime Color Camera pages for more in-depth information on each topic.
For 1-2 Prime Color cameras:
Windows 10 or 11 Professional (64 Bit)
Designated 1Gbps NIC w/drivers
CPU: Intel i9 or better, 3.5GHz+
Network switch with 1Gbps uplink port
RAM: 16GB+ of memory
GPU: GTX 1050 or better with the latest drivers and support for OpenGL version 4.0 or higher
M.2 SSD
For 3+ Prime Color cameras:
Windows 10 or 11 Professional (64 Bit), or Windows IoT (contact Support)
Designated 10Gbps+ NIC w/drivers
CPU: Intel i9 or better, 3.5GHz+
Network switch with 10Gbps+ uplink port
RAM: 32GB+ of memory
GPU: RTX 2070 or better with the latest drivers and support for OpenGL version 4.0 or higher
M.2 SSD
If you experience latency or camera drops, you may need to increase the specifications of certain components, especially if your setup includes a larger Prime Color camera count. Please reach out to our Support team if you are still experiencing any of these issues after upgrading the specifications above and applying the setup below.
Each Prime Color camera must be uplinked and powered through a standard PoE connection that can provide at least 15.4 watts to each port simultaneously.
Please note that if your aggregation switch is PoE, you can plug your Prime Color Cameras directly into the aggregation switch. PoE injectors are optional and will only be required if your aggregation switch is not PoE.
Prime Color cameras connect to the camera system just like other Prime series camera models. Simply plug the camera into a PoE switch that has enough available bandwidth, and it will be powered and synchronized along with the other tracking cameras. When you have two color cameras, they will need to be distributed evenly across different PoE switches so that the data load is balanced.
For 1-2 Prime Color Cameras, it is recommended to use a 1Gbps network switch with a 1Gbps uplink port and a 1Gbps NIC or higher. For 3+ Prime Color Cameras, it is required to use network switches with a 10Gbps uplink port in conjunction with a designated 10Gbps NIC and the appropriate drivers.
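For a rough sense of the bandwidth involved: an uncompressed 1920x1080 stream at 24-bit color and 60 FPS works out to about 1920 × 1080 × 24 × 60 ≈ 3 Gbps. Prime Color cameras compress their video, so actual rates are lower, but this back-of-envelope figure illustrates why a few color cameras can saturate a 1Gbps uplink while a 10Gbps uplink leaves headroom.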
NIC drivers may need to be installed via disc or downloaded from the manufacturer's support website. If you're unsure of where to find these drivers or how to install them, please reach out to our Support team.
When using multiple Prime Color cameras, we recommend connecting the color cameras directly into the 10-gigabit aggregation (uplink) switch, because this setup is best for preventing bandwidth bottlenecks. A PoE injector will be required if the uplink switch does not provide PoE. This allows the data to travel directly onto the uplink switch and to the host computer through the 10-gigabit network interface. This will also separate the color cameras from the tracking cameras.
You'll want to remove as much bloatware from your PC as possible in order to optimize your system and ensure minimal unnecessary background processes are running. Background processes can take up valuable CPU resources from Motive and cause frame drops while running your camera system.
There are many external resources on removing unused apps and halting unnecessary background processes, so this will not be covered within the scope of this page.
As a general rule for all OptiTrack camera systems, you'll want to disable all Windows firewalls and either disable or remove any Antivirus software. If firewalls or Antivirus software are enabled, they can cause frame drops while running your camera system.
In order for Motive to run above other processes, you'll need to change the Priority of Motive.exe to High.
Right click on the Motive shortcut on your Desktop and open its Properties.
In the Target: text field, enter the path below. This will allow Motive to run at High Priority, and the setting will persist across closing and reopening Motive.
C:\Windows\System32\cmd.exe /C start "" /high "C:\Program Files\OptiTrack\Motive\Motive.exe"
Please refrain from setting the priority to Realtime. If Realtime is selected, this can cause loss of input control (mouse, keyboard, etc.) since Windows can prioritize Motive above input processes.
If you're running a system with a CPU with a lower core count, you may need to disable Motive from running on a couple of cores. This will help stabilize the overall system and free up some cores for other Windows required processes.
From the Task Manager, navigate to the Details tab and right click on Motive.exe
Select Set Affinity
From this window, uncheck the cores you wish to disallow Motive.exe to run on.
Click OK
Please note that you should only ever disable two cores or fewer to ensure Motive still runs smoothly.
We recommend that you start with only one core and work your way up to two if you're still experiencing frame drop issues with your camera system.
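If you would rather launch Motive with the affinity already restricted, the start command used in the shortcut trick above also accepts an /affinity flag taking a hexadecimal core mask. The line below is an illustrative example only: the mask FC (binary 11111100) assumes an 8-logical-core CPU and excludes cores 0 and 1, so adjust the mask for your processor.

C:\Windows\System32\cmd.exe /C start "" /high /affinity FC "C:\Program Files\OptiTrack\Motive\Motive.exe"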
Windows IoT is a stripped-down version of the Windows OS. This can offer many benefits in terms of running a smooth system with very few of the 'extras' that come standard with more commercial versions of Windows. Windows IoT can further aid Prime Color Camera system performance.
If you're still experiencing issues with dropped frames even after altering the settings above, please reach out to our Support team for more information regarding Windows IoT.
In most cases your switch settings will not be required to be altered. However, if your switch has built in Storm Control, you'll want to disable this feature.
Your Network Interface Card has a few settings that you'll need to change in order to optimize your system and reduce issues when capturing Prime Color Camera video.
To navigate to the camera network's NIC:
Open Windows Settings
Select Ethernet from the navigation sidebar
Under Related settings select Change adapter options
From the Network Connections pop up window, right click on your NIC and select Properties
Select the Configure... button and navigate to the Advanced tab
For the Speed and Duplex property, you'll want to select the highest throughput of your NIC. If you have a 10Gbps NIC, make sure that 10Gbps Full Duplex is selected. This property allows the NIC to operate at its full capacity. If this setting is left unaltered, Windows has a tendency to throttle the NIC's throughput, causing a 10Gbps NIC to send data at only 2Gbps.
Interrupt Moderation allows the NIC to moderate interrupts. When a significant amount of data is being uplinked to Motive, this can cause more interrupts to occur, hindering system performance. You'll want to Disable this property.
After the above properties have been applied, the NIC will go through a reboot process. This process is automatic; however, it will make it appear that your camera network is down for a few minutes. This is normal, and once the NIC has rebooted, it should begin to work as expected.
Although not recommended, you may use a laptop PC to run a Prime Color Camera system. When using a laptop PC, you'll need to use an external network adapter. The settings above will typically not apply to these types of adapters, so no properties will need to be changed.
It is important to use a Thunderbolt port adapter with corresponding Thunderbolt ports on your laptop, as opposed to standard USB-C adapters/ports.
By default this value is set to 50, however, depending on the specifications of your particular system this value may need to be lower or can be raised higher so long as your system can handle the increased data output.
By default this value is set to full resolution of 1920 x 1080p. Typically you will not need to alter this setting.
It is recommended to close the Camera's View during recording. This further stabilizes Motive, minimizing lag and frame drops.
With an optimized system setup, motion capture systems are capable of obtaining extremely accurate tracking data from a small to medium sized capture volume. This quick start guide includes general tips and suggestions on precision capture system setups and important cautions to keep in mind. This page also covers some of the precision verification methods in Motive. For more general instructions, please refer to the Quick Start Guide: Getting Started or corresponding workflow pages.
Before going into details on precision tracking with an OptiTrack system, let's start with a brief explanation of the residual value, which is the key reconstruction output for monitoring system precision. The residual value is the average offset distance, in mm, between the converging rays when reconstructing a marker, and it indicates the preciseness of the reconstruction. A smaller residual value means that the tracked rays converge more precisely and achieve a more accurate 3D reconstruction. A well-tracked marker will have a sub-millimeter average residual value. In Motive, the tolerable residual distance is defined in the Reconstruction Settings under the Application Settings panel.
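Put in symbols: if p is the reconstructed 3D position of the marker and d(ray_i, p) is the perpendicular distance from the i-th camera's tracked ray to that point, the residual is essentially the mean (1/N) * sum_i d(ray_i, p) over the N contributing rays, which is why a tight convergence of rays shows up as a small value.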
When one or more markers are selected in the Live mode or from the 2D Mode of capture data, the corresponding mean residual value is displayed over the Status Panel located at the bottom-right corner of Motive.
First of all, optimize the capture volume for the most precise and accurate tracking results. Avoid a populated area when setting up the system and recording a capture. Clear any obstacles or trip hazards around the capture volume. Physical impacts on the setup will distort the calibration quality, and it could be critical especially when tracking at a sub-millimeter accuracy. Lastly, for best results, routinely recalibrate the capture volume.
Motion capture cameras detect reflected infrared light. Thus, having other reflective objects in the volume will negatively alter the results, which could be critical for precise tracking applications. If possible, use background objects that are IR black and non-reflective. Capturing against a dark background provides a clear contrast between bright and dark pixels, which could be less distinguishable against a white background.
Optimized camera placement techniques will greatly improve the tracking result and the measurement accuracy. The following guide highlights important setup instructions for the small volume tracking. For more details on general system setup, read through the Hardware Setup pages.
Mounting Locations
For precise tracking, better results will be obtained by placing cameras closer to the target object (adjusting focus will be required) in a sphere or dome-shaped camera arrangement, as shown in the images on the right. Good positional data in all dimensions (X, Y, and Z axis) will be attained only if there are cameras contributing to the calculation from a variety of different locations; each unique vantage adds additional data.
Mount Securely
For most accurate results, cameras should be perfectly stationary, securely fastened onto a truss system or an extremely rigid object. Any slight deformation or fluctuation to the mount structures may affect the result in sub-millimeter tracking applications. A small-sized truss system is ideal for the setup. Take extreme caution when mounting onto speed rails attached to a wall, because the building may fluctuate on hot days.
Increase the f-stop higher (smaller aperture) to gain a larger depth of field. Increased depth of field will make the greater portion of the capture volume in-focus and will make measurements more consistent throughout the volume.
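As a rule of thumb from optics: the hyperfocal distance is approximately H ≈ f² / (N·c), where f is the focal length, N the f-number, and c the acceptable circle of confusion. Raising N shrinks H, which is exactly why stopping down the aperture brings more of the capture volume into acceptable focus.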
Especially for close-up captures, camera aim and focus should be adjusted precisely. Aim the cameras towards the center of the capture volume. Optimize the camera focus by zooming into a marker in Motive, and rotating the focus knob on the camera until the smallest marker is captured with clearest image contrast. To zoom in and out from the camera view, place the mouse cursor over the 2D camera preview window in Motive and use the mouse-scroll.
For more information, please read through the Aiming and Focusing workflow page.
The following sections cover key configuration settings which need to be optimized for the precision tracking.
Camera settings are configured using the Devices pane and the Properties pane, both of which can be opened under the View tab in Motive. The key settings and recommended values are listed below.
Number (Varies): The number that Motive has assigned to that particular camera.
Device Type (Varies): The type of camera Motive has detected (PrimeX 41, PrimeX 13W, etc.).
Serial Number (Varies): The serial number of the camera. This information uniquely identifies the camera.
Focal Length (Varies): The distance between the camera's image sensor and its lens.
General
Enabled (Toggle 'On'): When Enabled is toggled on, the camera is active and able to collect marker data.
Rate (Maximum FPS): Set the system frame rate (FPS) to its maximum value. If you wish to use a slower frame rate, use the maximum frame rate during calibration and turn it down for the actual recording.
Reconstruction (Toggle 'On'): Controls whether the camera participates in 3D reconstruction.
Rate Multiplier (x1, 120 Hz): The rate multiplier. This setting is for syncing external devices with the camera system.
Exposure (250 μs): The exposure of the camera. The higher the number, the more microseconds the camera's sensor is exposed to light. If you're having issues seeing markers, raise the exposure; if there is too much reflection data in the volume, lower it.
Threshold (THR) (200): Do not change the Threshold (THR) or LED values; keep them at their default settings. The EXP and LED values are linked, so change only the EXP setting for brighter images. If you turn the EXP higher than 250, make sure to wand extra slowly to avoid blurred markers.
LED (Toggle 'On'): In some instances you may want to turn off the IR LEDs on a particular camera, e.g. using an active wand for calibration reduces extraneous reflections from influencing a calibration.
Video Mode (Default: Object Mode)
IR Filter (Toggle 'On'): Specific to PrimeX 13/13W, SlimX 13, and Prime Color FS cameras. Toggles the 850 nm IR filter, which allows only 850 nm IR light to reach the sensor. When toggled off, all light will be visible to the camera's image sensor.
Gain (1: Low (Short Range)): Set the Gain setting to low for all cameras. Higher gain settings will amplify noise in the image.
Display
Show Field of View (Toggle 'Off')
Show Frame Delivery Info (Toggle 'Off')
Live-reconstruction settings can be configured under the Application Settings panel. These settings determine which data gets reconstructed into the 3D data, and when needed, you can adjust the filter thresholds to prevent any inaccurate data from reconstructing. Read through the Application Settings page for more details on each setting. For precision tracking applications, the key settings and suggested values are listed below:
Solver Tab: Residual (mm): < 2.00. This is the tolerable residual distance described above; keeping it low rejects imprecisely converging rays.
Solver Tab: Minimum Rays to Start: ≥ 3. Set the minimum required number of rays higher. More accurate reconstruction will be achieved when more rays converge within the allowable residual offset.
Camera Tab: Minimum Pixel Threshold: ≥ 3. Since the cameras are placed closer to the tracked markers, each marker will appear bigger in the camera views. The minimum number of threshold pixels can be increased to filter out small extraneous reflections if needed.
Camera Tab: Circularity: > 0.60. A higher circularity setting filters out non-circular and merged reflections; see the marker placement notes below.
The following calibration instructions are specific to precision tracking. For more general information, refer to the Calibration page.
For calibrating small capture volumes for precision tracking, we recommend using a Micron Series wand, either the CWM-250 or CWM-125. These wands are made of invar alloy, very rigid and insensitive to temperature, and they are designed to provide a precise and constant reference dimension during calibration. At the bottom of the wand head, there is a label which shows a factory-calibrated wand length with a sub-millimeter accuracy. In the Calibration pane, select Micron Series under the OptiWand dropdown menu, and define the exact length under the Wand Length.
The CW-500 wand is designed for capturing medium to large volumes, and it is not suited for calibrating small volumes. Not only does it lack a label indicating a factory-calibrated length, but it is also made of aluminum, which makes it more vulnerable to thermal expansion. During the wanding process, Motive references the wand length for calibrating the capture volume, and any distortion in the wand length would cause the calibrated capture volume to be scaled slightly differently, which can be significant when capturing precise measurements. For this reason, a Micron Series wand is suitable for precision tracking applications.
Note: Never touch the marker on the CWM-250 or CWM-125 since any changes can affect the calibration and overall data.
Precision Capture Calibration Tips
Wand slowly. Waving the wand around quickly at high exposure settings will blur the markers and distort the centroid calculations, ultimately reducing the quality of your calibration.
Avoid occluding any of the calibration markers while wanding. Occluding markers will reduce the quality of the calibration.
A variety of unique samples is needed to achieve a good calibration. Wand in a three-dimensional volume, waving the wand in a variety of orientations throughout the volume.
Extra wanding in the target area you wish to capture will improve the tracking in the target region.
Wanding the edges of the volume helps improve the lens distortion calculations. This may cause Motive to report a slightly worse overall calibration result, but it will provide a better quality calibration, as explained below.
Starting/stopping the calibration process with the wand in the volume may help avoid getting rough samples outside your volume when entering and leaving.
Calibration reports and analyzing the reported error is a complicated subject because the calibration process uses its own samples for validation. For example, sampling near the edge of the volume may improve the accuracy of the system but provide slightly worse calibration results. This is because the samples near the edge will have more errors to be corrected. Acceptable mean error varies based on the size of your volume, the number of cameras, and desired accuracy. The key metrics to keep an eye on are the Mean 3D Error for the Overall Reprojection and the Wand Error. Generally, use calibrations with the Mean 3D Error less than 0.80 mm and the Wand Error less than 0.030 mm. These numbers may be hard to reproduce in regular volumes. Again, the acceptable numbers are subjective, but lower numbers are better in general.
In general, passive retro-reflective markers will provide better tracking accuracy. The boundary of the spherical marker can be more clearly distinguished on passive markers, and the system can identify an accurate position of the marker centroids. The active markers, on the other hand, emit light and the illumination may not appear as spherical on the camera view. Even if a spherical diffuser is used, there can be situations where the light is not evenly distributed. This could provide inaccurate centroid data. For this reason, passive markers are preferred for precision tracking applications.
For close-up capture, it could be inevitable to place markers close to one another, and when markers are placed in close vicinity, their reflections may be merged as seen by the camera’s imager. Merged reflections will have an inaccurate centroid location, or they may even be completely discarded by the circularity filter or the intrusion detection feature. For best results, keep the circularity filter at a higher setting (>0.6) and decrease the intrusion band in the camera group 2D filter settings to make sure only relevant reflections are reconstructed. The optimal balance will depend on the number and arrangement of the cameras in the setup.
There are editing methods to discard or fill in the missing data. However, for the most reliable results, such marker intrusions should be prevented before the capture by separating the marker placements or by optimizing the camera placements.
Once a Rigid Body is defined from a set of reconstructed points, utilize the Rigid Body Refinement feature to further refine the Rigid Body definition for precision tracking. The tool allows Motive to collect additional samples in the live mode for achieving more accurate tracking results.
In a mocap system, camera mount structures and other hardware components may be affected by temperature fluctuations. Refer to linear thermal expansion coefficient tables to examine which materials are susceptible to temperature changes, and avoid using temperature-sensitive materials for mounting the cameras. For example, aluminum has a relatively high thermal expansion coefficient, so mounting cameras onto aluminum structures may distort the calibration quality. For best accuracy, routinely recalibrate the capture volume, and take temperature fluctuation into account both when selecting the mount structures and before collecting data.
An ideal method of avoiding influence from environmental temperature is to install the system in a temperature-controlled volume. If that option is unavailable, routinely calibrate the volume before capture, and recalibrate between sessions when capturing for a long period. The effects are especially noticeable on hot days and will significantly affect your results, so consistently monitor the average residual value and how well your rays converge to individual markers.
The cameras will heat up with extended use, and changes in internal hardware temperature may also affect the capture data. For this reason, avoid capturing or calibrating right after powering the system. Tests have found that the cameras need to warm up in Live mode for about an hour until they reach a stable temperature. Typical stable temperatures are between 40-50 degrees Celsius, or 25 degrees Celsius above the ambient temperature. For Ethernet camera models, camera temperatures can be monitored from the Cameras View in Motive (Cameras View > Eye Icon > Camera Info).
If a camera exceeds 80 degrees Celsius, this can be a cause for concern, as it can cause frame drops and potential harm to the camera. If possible, keep the ambient environment as cool, dry, and consistent as possible.
Especially for measuring at sub-millimeters, even a minimal shift of the setup can affect the recordings. Re-calibrate the capture volume if your average residual values start to deviate. In particular, watch out for the following:
Avoid touching the cameras and the camera mounts.
Keep the capture area away from heavy foot traffic. People shouldn't be walking around the volume while the capture is taking place.
Closing doors, even from the outside, may be noticeable during recording.
The following methods can be used to check the tracking accuracy and to better optimize the reconstructions settings in Motive.
The calibration quality can also be analyzed by checking the convergence of the tracked rays into a marker. This is not a precise measurement, but the tracked rays can be used to check the calibration quality of multiple cameras at once. First of all, make sure tracked rays are visible: Perspective View pane > Eye button > Tracked Rays. Then, select a marker in the perspective view pane. Zoom all the way into the marker (you may need to zoom into the sphere), and you will be able to see the tracked rays (green) converging into the center of the marker. A good calibration should have all the rays converging into approximately one point, as shown in the following image. Essentially, this is a visual way of examining the average residual offset of the converging rays.
In Motive 3.0, a new feature was introduced called Continuous Calibration. This can aid in keeping your precision for longer in between calibrations. For more information regarding continuous calibration please refer to our Wiki page Continuous Calibration.
This wiki contains instructions on operating OptiTrack motion capture systems. If you are new to the system, start with the Quick Start Guides to begin your capture experience.
You can navigate through pages using links in the sidebar or using links included within the pages. You can also use the search bar provided on the top-right corner to search for page names and keywords that you are looking for. If you have any questions that are not documented in this wiki or from other provided documentation, please check our forum or contact our Support for further assistance.
OptiTrack website: http://www.optitrack.com
The Helpdesk: http://help.naturalpoint.com
NaturalPoint Forums: https://forums.naturalpoint.com
For versions of Motive 2.2 or older, please visit our old wiki site.
PrimeX 41, PrimeX 22, Prime 41*, and Prime 17W* camera models have powerful tracking capability that allows tracking outdoors. With strong infrared (IR) LED illuminations and some adjustments to its settings, a Prime system can overcome sunlight interference and perform 3D capture. This page provides general hardware and software system setup recommendations for outdoor captures.
Please note that when capturing outdoors, the cameras will have shorter tracking ranges compared to when tracking indoors. Also, the system calibration will be more susceptible to change in outdoor applications because there are environmental variables (e.g. sunlight, wind, etc.) that could alter the system setup. To ensure tracking accuracy, routinely re-calibrate the cameras throughout the capture session.
Even though it is possible to capture under the influence of the sun, it is best to pick cloudy days for captures in order to obtain the best tracking results. The reasons include the following:
Bright illumination from the daylight will introduce extraneous reconstructions, requiring additional effort in the post-processing on cleaning up the captured data.
Throughout the day, the position of the sun will continuously change as will the reflections and shadows of the nearby objects. For this reason, the camera system needs to be routinely re-masked or re-calibrated.
The surroundings can also work to your advantage or disadvantage depending on the situation. Different outdoor objects reflect 850 nm Infrared (IR) light in different ways that can be unpredictable without testing. Lining your background with objects that are black in Infrared (IR) will help distinguish your markers from the background better which will help with tracking. Some examples of outdoor objects and their relative brightness is as follows:
Grass typically appears as bright white in IR.
Asphalt typically appears dark black in IR.
Concrete varies, but it usually appears gray in IR.
1. [Camera Setup] Camera mount setup
In general, setting up a truss system for mounting the cameras is recommended for stability, but for outdoor captures, it could be too much effort to do so. For this reason, most outdoor capture applications use tripods for mounting the cameras.
2. [Camera Setup] Camera aim
Do not aim the cameras directly towards the sun. If possible, place and aim the cameras so that they are capturing the target volume at a downward angle from above.
3. [Camera Setup] Lens f-stop
Increase the f-stop setting on the Prime cameras to decrease the aperture size of the lenses. The f-stop setting determines the amount of light that is let through the lenses, and increasing the f-stop value will decrease the overall brightness of the captured image, allowing the system to better accommodate sunlight interference. Furthermore, changing this allows camera exposure to be set to a higher value, which is discussed in a later section. Note that f-stop can be adjusted only on PrimeX 41, PrimeX 22, Prime 41*, and Prime 17W* camera models.
4. [Camera Setup] Utilize shadows
Even though it is possible to capture under sunlight, the best tracking result is achieved when the capture environment is best optimized for tracking. Whenever applicable, utilize shaded areas in order to minimize the interference by sunlight.
1. [Camera Settings] Max IR LED Strength
Increase the LED setting on the camera system to its maximum so that the IR LEDs illuminate at full strength. Strong IR illumination will allow the cameras to better differentiate the emitted IR reflections from ambient sunlight.
2. [Camera Settings] Camera Exposure
In general, increasing camera exposure makes the overall image brighter, but it also allows the IR LEDs to light up and remain at their maximum brightness for a longer period of time on each frame. This way, the IR illumination is stronger on the cameras, and the imager can more easily detect the marker reflections in the IR spectrum.
When used in combination with the increased f-stop on the lens, this adjustment will give a better distinction of IR reflections. Note that this setup applies only to outdoor applications; for indoor applications, the exposure setting is generally used to control the overall brightness of the image.
*Legacy camera models
Welcome to the Quick Start Guide: Getting Started!
This guide provides a quick walk-through of installing and using OptiTrack motion capture systems. Key concepts and instructions are summarized in each section of this page to help you get familiarized with the system and get you started with the capture experience.
Note that Motive offers features far beyond the ones listed in this guide, and the capability of the system can be further optimized to fit your specific capture applications using the additional features. For more detailed information on each workflow, read through the corresponding workflow pages in this wiki: hardware setup and software setup.
For best tracking results, you need to prepare and clean up the capture environment before setting up the system. First, remove unnecessary objects that could block the camera views. Cover open windows and minimize incoming sunlight. Avoid setting up a system over reflective flooring since IR lights from cameras may get reflected and add noise to the data. If this is not an option, use rubber mats to cover the reflective area. Likewise, items with reflective surfaces or illuminating features should be removed or covered with non-reflective materials in order to avoid extraneous reflections.
Key Checkpoints for a Good Capture Area
Minimize ambient lights, especially sunlight and other infrared light sources.
Clean capture volume. Remove unnecessary obstacles within the area.
Tape over, or cover, remaining reflective objects in the area.
See Also: Hardware Setup workflow pages.
Ethernet Camera Models: PrimeX series and SlimX 13 cameras. Follow the wiring diagram below and connect each of the required system components.
Connect PoE Switch(es) to the Host PC: Start by connecting a PoE switch to the host PC via an Ethernet cable. Since the camera system takes up a large amount of data bandwidth, the Ethernet camera network traffic must be separated from the office/local area network. If the computer used for capture is connected to an existing network, you will need to use a second Ethernet port or an add-on network card to connect the computer to the camera network. When you do, make sure to turn off your computer's firewall for that particular network under the Windows Firewall settings.
Connect the Ethernet Cameras to the PoE Switch(s): Ethernet cameras connect to the host PC via PoE/PoE+ switches using Cat 6, or above, Ethernet cables.
Power the Switches: The switch must be powered in order to power the cameras. To completely shut down the camera system, the network switch needs to be powered off.
Ethernet Cables: Ethernet cable connection is subject to the limitations of the PoE (Power over Ethernet) and Ethernet communications standards, meaning that the distance between camera and switch can go up to about 100 meters when using Cat 6 cables (Ethernet cable type Cat5e or below is not supported). For best performance, do not connect devices other than the computer to the camera network. Add-on network cards should be installed if additional Ethernet ports are required.
Ethernet Cable Requirements
Cable Type
There are multiple categories of Ethernet cables, each with different specifications for maximum data transmission rate and cable length. For an Ethernet-based system, Cat 6 or above Gigabit Ethernet cables should be used. 10 Gigabit Ethernet cables (Cat 6a or above) are recommended, in conjunction with a 10 Gigabit uplink switch, for the connection between the uplink switch and the host PC in order to accommodate the high data traffic.
Electromagnetic Shielding
Please use cables that have electromagnetic interference (EMI) shielding. If unshielded cables are used, cables that are close to each other could interfere and cause cameras to stall in Motive.
External Sync: If you wish to connect external devices, use the eSync synchronization hub. Connect the eSync into one of the PoE switches using an Ethernet cable, or if you have a multi-switch setup, plug the eSync into the aggregation switch.
Uplink Switch: For systems with higher camera counts that use multiple PoE switches, use an uplink Ethernet switch to link all of the switches to the host PC. In the end, the switches must be connected in a star topology with the uplink switch at the central node connecting to the host PC. NEVER daisy-chain multiple PoE switches in series, because doing so can introduce latency to the system.
High Camera Counts: For setting up more than 24 Prime series cameras, we recommend using a 10 Gigabit uplink switch and connecting it to the host PC via an Ethernet cable that supports 10 Gigabit transfer rate — Cat6a or above. This will provide larger data bandwidth and reduce the data transfer latency.
PoE switch requirement: The PoE switches must be able to provide 15.4W power to every port simultaneously. PrimeX 41, PrimeX 22, and Prime Color camera models run on a high power mode to achieve longer tracking ranges, and they require 30W of power from each port. If you wish to operate these cameras at standard PoE mode, set the LLDP (PoE+) Detection setting to false under the application settings. For network switches provided by OptiTrack, refer to the label for the number of cameras supported for each switch.
See Also: Network setup page.
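As a quick sanity check on switch sizing, you can total the per-port draw against the switch's overall PoE power budget, reusing the per-port figures above. The camera count and the 370 W budget below are hypothetical example values; substitute the numbers from your own switch's datasheet.

# Hypothetical sanity check: can the switch power every camera at full draw?
cameras = 12
watts_per_camera = 30.0   # PrimeX 41/22 and Prime Color in high power mode (PoE+)
switch_budget_w = 370.0   # example figure; check your switch's datasheet

required_w = cameras * watts_per_camera
print(f"Required: {required_w:.0f} W of {switch_budget_w:.0f} W budget")
assert required_w <= switch_budget_w, "Switch cannot power all cameras at full draw"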
Optical motion capture systems utilize multiple 2D images from each camera to compute, or reconstruct, corresponding 3D coordinates. For best tracking results, cameras must be placed so that each captures a unique vantage of the target capture area. Place the cameras around the capture volume, as shown in the example below, so that markers in the volume are visible to at least two cameras at all times. Mount cameras securely onto stable structures (e.g. a truss system) so that they don't move throughout the capture. When using tripods or camera stands, ensure that they are placed in stable positions. After placing the cameras, aim them so that their views overlap around the region where most of the capture will take place. Any significant camera movement after system calibration may require re-calibration. Cable strain relief should be used at the camera end of camera cables to prevent potential damage to the cameras.
See Also: Camera Placement and Camera Mount Structures pages.
In order to obtain accurate and stable tracking data, it is very important that all of the cameras are correctly focused on the target volume. This is especially important for close-up and long-range captures. For common tracking applications, focus-to-infinity should work fine; however, it is still important to confirm that each camera in the system is focused.
To adjust or check camera focus, place some markers on the target tracking area. Then, set the camera to raw grayscale mode, increase the exposure and LED settings, zoom in on one of the retroreflective markers in the capture volume, and check the clarity of the image. If the image is blurry, adjust the camera focus and find the point where the marker is best resolved.
See Also: Aiming and Focusing page.
In order to properly run a motion capture system using Motive, the host PC must satisfy the minimum system requirements. Required minimum specifications vary depending on the size of the mocap system and the types of cameras used. Consult our Sales Engineers, or use the Build Your Own feature on our website, to find out host PC specification requirements.
Motive is a software platform designed to control motion capture systems for various tracking applications. Motive not only allows the user to calibrate and configure the system, but it also provides interfaces for both capturing and processing of 3D data. The captured data can be recorded or live-streamed into other pipelines.
If you are new to Motive, we recommend reading through the Motive Basics page after going through this guide to learn the basic navigation controls in Motive.
Motive Activation Requirements
The following items are required to activate Motive. Please note that the license must be valid through the release date of the version that you are activating. If the license has expired, please update the license or use an older version of Motive that was released prior to the license expiration date.
Motive 3.x license
USB Security Key
Host PC Requirements
Required PC specifications may vary depending on the size of the camera system. Generally, systems with more than 24 cameras require the recommended specs.
Recommended specifications:
OS: Windows 10, 11 (64-bit)
CPU: Intel i7 or better, 3+ GHz
RAM: 16 GB of memory
GPU: GTX 1050 or better with the latest drivers and support for OpenGL 3.2+
Minimum specifications:
OS: Windows 10, 11 (64-bit)
CPU: Intel i7, 3+ GHz
RAM: 8 GB of memory
GPU: Supports OpenGL 3.2+
Download and Install
To install Motive, simply download the Motive software installer for your operating system from the Motive Download Page, then run the installer and follow its prompts.
Note: Anti-virus software can interfere with Motive's ability to communicate with cameras or other devices, and it may need to be disabled or configured to allow the device communication to properly run the system.
License Activation Steps
Insert the USB Security Key into a USB-C port on the computer. If needed, you can also use a USB-A adapter to connect.
Launch Motive
Activate your software using the License Tool, which can be accessed in the Motive splash screen. You will need to input the License Serial Number and the Hash Code for your license.
After activation, the License Tool will place the license file associated with the USB Security Key in the License folder. For further license activation questions, visit the Licensing FAQs or contact our Support.
Notes on using USB Security Key
When connecting the USB Security Key to the computer, avoid sharing the USB card with other USB devices that transmit large amounts of data frequently. For example, if you have external devices (e.g. force plates, NI-DAQ) that communicate via USB, connect those devices to a separate USB card so that they don't interfere with the Security Key.
The USB Hardware Key from older versions of Motive has been replaced by the USB Security Key in Motive 3.x and above.
Notes on First Connection with a USB Security Key
The Security Key must register with the connected cameras upon initial activation, or when one or more cameras are added to an existing system. This process requires that the host PC be connected to the internet and may take a few minutes. Once the cameras have been registered, an internet connection is no longer required.
By default, Motive will start on the calibration layout with all the necessary panes open. Using this layout, you can calibrate the camera system and construct a 3D tracking volume. The layout may be slightly different for certain camera models or software licenses.
The following panes will be open, including the Data pane, which provides access to session folders and recorded Takes to view and configure their properties.
The Control Deck, located at the bottom of Motive, is where you can control recording (Live mode) or playback (Edit mode) of capture data. In Live mode, you can use the Control Deck to start recording and assign a filename for the capture. In Edit mode, you can use it to control the playback of recorded Takes.
See Also: List of UI pages from the Motive section of the wiki.
Use the following controls for navigating throughout the 2D and 3D viewports in Motive. Most of the navigation controls are customizable, including both mouse actions and hotkeys. These mouse and keyboard controls can be customized through the Application Settings panel.
Rotate view: Right + Drag
Pan view: Middle (wheel) click + drag
Zoom in/out: Mouse Wheel
Select in View: Left mouse click
Toggle selection in View: CTRL + left mouse click
Show one viewport: Shift + 1
Horizontally split the viewport: Shift + 2
Now that the cameras are connected and showing up in Motive, the next step is to configure the camera settings. Appropriate camera settings will vary depending on various factors including the capture environment and tracked objects. The overall goal is to configure the settings so that the marker reflections are clearly captured and distinguished in the 2D view of each camera. For a detailed explanation on individual settings, please refer to the Devices pane page.
To check whether the camera settings are optimized, it is best to check both the grayscale mode images and the tracking mode (Object or Precision) images and make sure the marker reflections stand out from the image. You can switch a camera into grayscale mode either in Motive or by using the Aim Assist button on supported cameras. In Motive, right-click on the Cameras viewport and switch the video mode in the context menu, or change the video mode through the Properties pane.
Exposure Setting
The exposure setting determines how long the camera imagers are exposed per frame of data. The longer the exposure, the more light the camera captures, creating brighter images that can improve visibility for small and dim markers. However, high exposure values can introduce false markers, larger marker blooms, and marker blurring, all of which can negatively impact marker data quality. It is best to keep the exposure setting as low as possible while the markers remain clearly visible in the captured images.
Tip: For the calibration process, click the Layout → Calibrate menu (CTRL + 1) to access the calibration layout.
In order to start tracking, all cameras must first be calibrated. Through the camera calibration process, Motive computes position and orientation of cameras (extrinsic) as well as amounts of lens distortions in captured images (intrinsics). Using the calibration results, Motive constructs a 3D capture volume, and within this volume, motion tracking is accomplished. All of the calibration tools can be found under the Calibration pane. Read through the Calibration page to learn about the calibration process and what other tools are available for more efficient workflows.
See Also: Calibration page.
Duo/Trio Tracking Bars: Camera calibration is not needed for Duo/Trio tracking bars. The cameras are pre-calibrated using the fixed camera placements. This allows the tracking bars to work right out of the box without the calibration process. To adjust the ground plane, use the Coordinate System Tools in Motive.
Starting a Calibration
To start a system calibration, open the Calibration Pane. Under the Calibration pane, you can choose to start a new calibration or to modify the existing one. For this guide, click New Calibration for a fresh calibration.
Masking
Before the system calibration, any extraneous reflections or unnecessary markers should ideally be removed or covered so that they are not seen by the cameras. However, it may not always be possible to remove all of them. In this case, these extraneous reflections can be ignored by applying masks over them during the calibration.
Check the calibration pane to see if any of the cameras are seeing extraneous reflections or noise in their view. A warning sign will appear over these cameras.
Check the camera view of the corresponding camera to identify where the extraneous reflection is coming from, and if possible, remove them from the capture volume or cover them so that the cameras do not see them.
If reflections still exist, click Mask to automatically apply masks over all of the reflections detected in the camera views.
Once all of the reflections have been masked or removed, click Continue to proceed to the wanding step.
Wanding
In the wanding stage, we will use the Calibration Wand to collect wanding samples that will be used for calibrating the system.
Set the Calibration Type to Full.
Under the Wand settings, specify the wand that you will use to calibrate the volume. It is very important to input the matching wand size here. If an incorrect dimension is given to Motive, the calibrated 3D volume will be scaled incorrectly.
Click Start Wanding to start collecting the wanding sample.
Once the wanding process starts, bring your calibration wand into the capture volume and wave it gently across the entire volume. Draw figure-eights repeatedly with the wand to collect samples at varying orientations, and cover as much space as possible for sufficient sampling. Wanding trails will be shown in color in the 2D View. A grid/table displaying the status of the wanding process will appear in the Calibration pane so you can monitor progress.
As each camera collects the wanding samples, the camera grid representing the wanding status of each camera will start changing its color to bright green. This provides visual feedback on whether sufficient samples have been collected by each camera. Wave the wand until all boxes are filled with bright green color.
Once enough samples have been collected, press the Start Calculation button to start calibrating. The calculation may take a few minutes to complete.
When the calculation is finished, its results will get displayed. If the overall result is acceptable, click Continue to proceed to setting up the ground. If the result is not satisfactory, click Cancel and go through the wanding once more.
Wanding tips
For best results, collect wand samples evenly and comprehensively throughout the volume, covering both low and high elevations. If you wish to start calibrating inside the volume, cover one of the markers and expose it wherever you wish to start wanding. When at least two cameras detect all three markers while no other reflections are present in the volume, the wand will be recognized, and Motive will start collecting samples.
A sufficient sample count for calibration may vary for different sized volumes, but in general, collect 2500 ~ 6000 samples for each camera. Once a sufficient number of samples has been collected, press the Start Calculation button under the Calibration section.
During the wanding process, each camera should see only the three markers on the calibration wand. If any of the cameras are detecting extraneous reflections, go back to the masking step to mask them.
Setting the Ground Plane
Now that all of the cameras have been calibrated, the next step is to define the ground plane of the capture volume.
Place a Calibration Square inside the capture volume. Position the square so that the vertex marker is placed directly over the desired global origin.
Orient the calibration square so that the longer arm points in the desired +Z axis direction and the shorter arm points in the desired +X axis direction of the volume. Motive uses a y-up right-handed coordinate system.
Level the calibration square parallel to the ground plane.
At this point, the Calibration pane should detect which calibration square has been placed in the tracking volume. If not, you may want to specifically select the three markers on the calibration square from the 3D view in Motive.
Click Set Ground Plane to complete the calibration.
Once the camera system has been calibrated, Motive is ready to collect data. But before doing so, let's prepare the session folders for organizing the capture recordings and define the trackable assets, including Rigid Bodies and/or Skeletons.
Motive Recordings
See Also: Motive Basics page.
Motive Profiles
Motive's software configurations are saved to Motive Profiles (*.motive extension). All of the application-related settings can be saved into the Motive profiles, and you can export and import these files and easily maintain the same software configurations.
Place the retro-reflective markers onto subjects (Rigid Body or Skeleton) that you wish to track. Double-check that the markers are attached securely. For skeleton tracking, open the Builder pane, go to skeleton creation options, and choose a marker set you wish to use. Follow the skeleton avatar diagram for placing the markers. If you are using a mocap suit, make sure that the suit fits as tightly as possible. Motive derives the position of each body segment from related markers that you place on the suit. Accordingly, it is important to prevent the shifting of markers as much as possible. Sample marker placements are shown below.
See Also: Markers page for marker types, or Rigid Body Tracking and Skeleton Tracking page for placement directions.
Tip: For creating trackable assets, click the Layout → Create menu item to access the model creation layout.
Create Rigid Body
To define a Rigid Body, simply select three or more markers in the Perspective View, right-click, and select Rigid Body → Create Rigid Body From Selected. You can also use the CTRL+T hotkey, or define the Rigid Body through the Builder pane.
Create Skeleton
To define a skeleton, have the actor enter the volume with markers attached at the appropriate locations. Open the Builder pane and select Skeleton and Create. Under the marker set section, select a marker set you wish to use, and a corresponding model with the desired marker locations will be displayed. After verifying that the marker locations on the actor correspond to those in the Builder pane, instruct the actor to strike the calibration pose. The most common calibration pose is the T-pose, which requires a proper standing posture with the back straight and the head looking directly forward, and both arms stretched to the sides, forming a "T" shape. While the actor is in the T-pose, select all of the markers of the desired skeleton in the 3D view and click the Create button in the Builder pane. In some cases, you may not need to select the markers if only the desired actor is in view.
See Also: Rigid Body Tracking page and Skeleton Tracking page.
Tip: For recording, use the Layout → Capture menu item to access the capture layout.
Once the volume is calibrated and skeletons are defined, you are ready to capture. In the Control Deck at the bottom, press the dimmed red record button, or simply press the spacebar while in Live mode, to begin capturing. This button will illuminate in bright red to indicate that recording is in progress. You can stop recording by clicking the record button again, and a corresponding capture file (TAK extension), also known as a capture Take, will be saved within the current session folder. Once a Take has been saved, you can play back captures, reconstruct, edit, and export your data in a variety of formats for additional analysis or use with most 3D software.
When tracking skeletons, it is beneficial to start and end the capture with a T-pose. This allows you to recreate the skeleton in post-processing when needed.
See Also: Data Recording page.
After capturing a Take, the recorded 3D data and its trajectories can be post-processed using the Data Editing tools found in the Edit Tools pane. Data editing tools provide post-processing features such as deleting unreliable trajectories, smoothing selected trajectories, and interpolating missing (occluded) marker positions. Post-editing the 3D data can improve the quality of the tracking data.
Tip: For data editing, use the Layout → Edit menu item to access the edit layout.
General Editing Steps
Skim through the overall frames in a Take to get an idea of which frames and markers need to be cleaned up.
Refer to the Labels pane and inspect gap percentages in each marker.
Select a marker that is often occluded or misplaced.
Look through the frames in the Graph pane, and inspect the gaps in the trajectory.
For each gap in frames, look for an unlabeled marker at the expected location near the solved marker position. Re-assign the proper marker label if the unlabeled marker exists.
Use Trim Tails feature to trim both ends of the trajectory in each gap. It trims off a few frames adjacent to the gap where tracking errors might exist. This prepares occluded trajectories for Gap Filling.
Find the gaps to be filled, and use the Fill Gaps feature to model the estimated trajectories for occluded markers (a conceptual sketch of gap interpolation follows this list).
Re-Solve assets to update the solve from the edited marker data.
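As a conceptual illustration of what gap filling does (Motive's Fill Gaps feature offers its own interpolation models; this is not its implementation), here is a simple linear interpolation over an occluded stretch of one marker coordinate in Python, assuming occluded frames are represented as missing values:

import numpy as np
import pandas as pd

# One axis of a marker trajectory in mm; NaN marks occluded frames.
x = pd.Series([10.0, 10.5, np.nan, np.nan, 12.0, 12.4])
filled = x.interpolate(method="linear")
print(filled.tolist())  # occluded frames estimated as 11.0 and 11.5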
Markers detected in the camera views get trajectorized into 3D coordinates. The reconstructed markers need to be labeled for Motive to distinguish different trajectories within a capture. Trajectories of labeled reconstructions can be exported individually or used (solved altogether) to track the movements of the target subjects. Markers associated with Rigid Bodies and Skeletons are labeled automatically through the auto-labeling process. Note that Rigid Body and Skeleton markers can be auto-labeled both in Live mode (before capture) and in Edit mode (after capture). Individual markers can also be labeled, but each must be manually labeled in post-processing using assets and the Labeling pane. These manual labeling tools can also be used to correct labeling errors. Read through the Labeling page for more details on assigning and editing marker labels.
Auto-label: Automatically label sets of Rigid Body markers and skeleton markers using the corresponding asset definitions.
Manual Label: Label individual markers manually using the Labeling pane, assigning labels defined in the Marker Set, Rigid Body, or Skeleton assets.
See Also: Labeling page.
Changing Marker Labels and Colors
When needed, you can use the Constraints pane to adjust marker labels for both Rigid Body and Skeleton markers. You can also adjust marker sticks and marker colors as needed.
Motive exports reconstructed 3D tracking data in various file formats, and exported files can be imported into other pipelines to further utilize capture data. Supported formats include CSV and C3D for Motive: Tracker, and additionally FBX, BVH, and TRC for Motive: Body. To export tracking data, select a Take to export and open the export dialog window, which can be accessed from File → Export Tracking Data or by right-clicking a Take → Export Tracking Data in the Data pane. Multiple Takes can be selected and exported from Motive or by using the Motive Batch Processor. From the export dialog window, the frame rate, measurement scale, and frame range of the exported data can be configured. Frame ranges can also be specified by selecting a frame range in the Graph View pane before exporting a file. In the export dialog window, corresponding export options are available for each file format.
See Also: Data Export page.
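To illustrate reusing exported data downstream, the sketch below loads a Motive CSV export with pandas. The file name is hypothetical, and the number of metadata rows preceding the frame data varies with Motive version and export options, so treat skiprows as a placeholder to adjust against your own export:

import pandas as pd

# Hypothetical file; Motive CSV exports begin with several metadata rows
# (capture info, asset/marker names, coordinate labels) before the frame data.
df = pd.read_csv("Take_example.csv", skiprows=6)

print(df.shape)         # (frames, columns)
print(df.iloc[:5, :8])  # first frames: frame index, time, then X/Y/Z columns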
Motive offers multiple options to stream tracking data to external applications in real time. Tracking data can be streamed in both Live mode and Edit mode. Streaming plugins are available for Autodesk Motion Builder, Visual3D, The MotionMonitor, Unreal Engine 4, 3ds Max, Maya (VCS), and VRPN, and they can be downloaded from the OptiTrack website. For other streaming options, the NatNet SDK enables users to build custom client and server applications to stream capture data. Common motion capture applications rely on real-time tracking, and the OptiTrack system is designed to deliver data at extremely low latency even when streaming to third-party pipelines. Detailed instructions on specific streaming protocols are included in the PDF documentation that ships with the respective plugins and SDKs.
See Also: Data Streaming page
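For a sense of what a custom NatNet client looks like, here is a minimal sketch modeled on the PythonClient sample (NatNetClient.py) that ships with the NatNet SDK. The attribute and method names follow recent SDK samples but differ between versions, so treat this as an outline rather than a drop-in script:

# Sketch based on the NatNet SDK's Python sample; names vary by SDK version.
from NatNetClient import NatNetClient

def receive_new_frame(data_dict):
    # Called once per mocap frame with frame metadata (frame number, timestamp, ...).
    pass

def receive_rigid_body_frame(body_id, position, rotation):
    # Called once per Rigid Body per frame; rotation is a quaternion.
    print(f"Rigid Body {body_id}: pos={position}, quat={rotation}")

client = NatNetClient()
client.set_client_address("127.0.0.1")  # machine running this script
client.set_server_address("127.0.0.1")  # machine running Motive
client.new_frame_listener = receive_new_frame
client.rigid_body_listener = receive_rigid_body_frame
client.run()  # starts the data-receiving threads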
USB Cameras are currently not supported in 3.x versions of Motive. The USB camera pages on this wiki are purely for reference only, at this time.
This page provides instructions on how to set up and use the OptiTrack active marker solution.
Additional Note
This solution is supported for Ethernet camera systems (Slim 13E or Prime series cameras) only. USB camera systems are not supported.
Motive version 2.0 or above is required.
This guide covers active component firmware versions 1.0 and above; this includes all active components that were shipped after September 2017.
The OptiTrack Active Tracking solution allows synchronized tracking of active LED markers using an OptiTrack camera system. It consists of the Base Station and, depending on the user's choice, Active Tags that can be integrated into any object and/or the Active Puck, which can act as its own single Rigid Body.
Connected to the camera system, the Base Station emits RF signals to the active markers, allowing precise synchronization between camera exposure and illumination of the LEDs. Each active marker is uniquely labeled in Motive, allowing more stable Rigid Body tracking, since active markers will never be mislabeled and unique marker placements are no longer required for distinguishing multiple Rigid Bodies.
Sends out radio frequency signals for synchronizing the active markers.
Powered by PoE, connected via Ethernet cable.
Must be connected to one of the switches in the camera network.
Connects to a USB power source and illuminates the active LEDs.
Receives RF signals from the Base Station and correspondingly synchronizes illumination of the connected active LED markers.
Emits 850 nm IR light.
4 active LEDs in each bundle and up to two bundles can be connected to each Tag.
(8 active LEDs per Tag: 4 LEDs per set × 2 sets)
Size: 5 mm (T1 ¾) Plastic Package, half angle ±65°, typ. 12 mW/sr at 100mA
An Active Tag self-contained in a trackable object, providing 6 DoF information for any arbitrary object it is attached to. It carries a factory-installed Active Tag with 8 LEDs and a rechargeable battery with up to 10 hours of run time on a single charge.
Connects to one of the PoE switches within the camera network.
For best performance, place the base station near the center of your tracking space, with unobstructed lines of sight to the areas where your Active Tags will be located during use. Although the wireless signal is capable of traveling through many types of obstructions, there still exists the possibility of reduced range as a result of interference, particularly from metal and other dense materials.
Do not place external electromagnetic or radiofrequency devices near the Base Station.
When the Base Station is working properly, the LED closest to the antenna will blink green while Motive is running.
BaseStation LEDs
Note: The behavior of the LEDs on the Base Station is subject to change.
Communication Indicator LED: When the BaseStation is successfully sending out data and communicating with the active pucks, the LED closest to the antenna will blink green. If this LED is red, it indicates that the BaseStation has failed to establish a connection with Motive.
Interference Indicator LED: The middle LED indicates whether there is other signal traffic on the respective radio channel and PAN ID that might interfere with the active components. This LED should stay dark for the active marker system to work properly. If it flashes red, consider switching both the channel and PAN ID on all of the active components.
Power Indicator LED: The LED located at the corner, furthest from the antenna, indicates power for the BaseStation.
Connect two sets of active markers (4 LEDs in each set) into a Tag.
Connect the battery and/or a micro USB cable to power the Tag. The Tag takes 3.3 V ~ 5.0 V input from the micro USB cable. For powering through the battery, use only the batteries supplied by us. To recharge the battery, keep the battery connected to the Tag and then connect the micro USB cable.
To initialize the Tag, press the power switch once. Be careful not to hold down the power switch for more than a second, as this will start the device in firmware update (DFU) mode. If it initializes in DFU mode, which is indicated by two orange LEDs, simply power off and restart the Tag. To power off the Tag, hold down the power switch until the status LEDs go dark.
Once powered, you should be able to see the illumination of IR LEDs from the 2D reference camera view.
Puck Setup
Press the power button for 1~2 seconds and release. The top-left LED will illuminate orange while the puck initializes. Once it initializes, the bottom LED will light up green if it has made a successful connection with the Base Station. The top-left LED will then start blinking green, indicating that sync packets are being received.
Active Pattern Depth
Settings → Live Pipeline → Solver tab. Default value: 12.
This adjusts the complexity of the illumination patterns produced by active markers. In most applications, the default value can be used for quality tracking results. If a high number of Rigid Bodies are tracked simultaneously, this value can be increased to allow more combinations of illumination patterns on each marker. If this value is set too low, duplicate active IDs can be produced; should this error appear, increase the value of this setting.
Minimum Active Count
Settings → Live Pipeline → Solver tab. Default value: 3.
Sets the number of rays required to establish the active ID for each on-frame of an active marker cycle. If this value is increased and active markers become occluded, it may take longer for the active markers to be re-established in the Motive view. The majority of applications will not need to alter this setting.
Active Marker Color
Settings → Views → 3D tab. Default color: blue.
The color assigned to this setting is used to distinguish active from passive markers in the viewer pane of Motive.
For tracking of the active LED markers, the following camera settings may need to be adjusted for best tracking results:
For tracking active markers, set the camera exposure a bit higher than when tracking passive markers. This allows the cameras to better detect the active markers. The optimal value will vary depending on the camera system setup, but in general, set the camera exposure between 400 and 750 microseconds.
Rigid Body definitions that are created from actively labeled reconstructions will search for the specific marker IDs, along with the marker placements, to track the Rigid Body, as further explained in the following section.
Duplicate active frame IDs
For active labeling to work properly, it is important that each marker has a unique active ID. When markers share the same ID, there may be problems reconstructing those active markers, and the following notification message will appear. If you see this notification, please contact support to change the active IDs on your active markers.
In recorded 3D data, the labels of unlabeled active markers will still indicate that they are active markers. As shown in the image below, an Active prefix is assigned in addition to the active ID. This applies only to individual active markers that are not auto-labeled; markers that are auto-labeled using a trackable model will be assigned their respective labels.
When a trackable asset (e.g. Rigid Body) is defined using active markers, its active ID information gets stored in the asset along with the marker positions. When auto-labeling markers in the space, the trackable asset will search for reconstructions with matching active IDs, in addition to matching marker arrangements, to auto-label a set of markers. This adds an additional safeguard to the auto-labeler and prevents mislabeling errors.
Rigid Body definitions created from actively labeled reconstructions will search for respective marker IDs in order to solve the Rigid Body. This gives a huge benefit because the active markers can be placed in perfectly symmetrical marker arrangements among multiple Rigid Bodies and not run into labeling swaps. With active markers, only the 3D reconstructions with active IDs stored under the corresponding Rigid Body definition will contribute to the solve.
If a Rigid Body was created from actively labeled reconstructions, the corresponding Active ID gets saved under the corresponding Rigid Body properties. In order for the Rigid Body to be tracked, the reconstructions with matching marker IDs in addition to matching marker placements must be tracked in the volume. If the active ID is set to 0, it means no particular marker ID is given to the Rigid Body definition and any reconstructions can contribute to the solve.
USB Cameras are currently not supported in 3.x versions of Motive. The USB camera pages on this wiki are purely for reference only, at this time.
Changes the video mode of the camera. For more information regarding camera video types, please see the Camera Video Types page.
When toggled on, this shows the camera's field of view. This is particularly useful when setting up a capture volume.
When toggled on, this setting shows the frame delivery info for all the cameras in the system, overlaid on the selected camera's view.
Solver Tab: Residual (mm)
Set a smaller allowable value for precision volume tracking. Any offset above 2.00 mm will be considered inaccurate, and the corresponding 2D data will be excluded from contributing to the reconstruction.
Increasing the circularity value will filter out non-marker reflections. Furthermore, it prevents collecting data where the calculated centroid is no longer reliable.
First, go to the Perspective View pane and select a marker, then go to the Camera Preview pane > Eye button > Set Marker Centroids: True. Make sure the cameras are in Object mode, then zoom into the selected marker in the 2D view. The marker will have two crosshairs on it: one white and one yellow. The amount of offset between the crosshairs will give you an idea of how closely the calculated 2D centroid location (thicker white line) aligns with the reconstructed position (thinner yellow line). Switching between grayscale mode and Object mode will make the errors more distinguishable. The image below is an example of a poor calibration; a good calibration should have the yellow and white lines closely aligned.
Connected cameras will be listed under the Devices pane. This panel is where we can configure settings (FPS, exposure, LED, etc.) for each camera and decide whether to use selected cameras for 3D tracking or reference videos. Only the cameras that are set to a tracking mode will contribute to reconstructing 3D coordinates; cameras in reference video mode capture grayscale images for reference purposes only. The Devices pane can be accessed under the View tab in Motive or by clicking its icon on the main toolbar.
When an object is selected in Motive, all of its related properties will be listed under the Properties pane. For example, when a Rigid Body is selected in the 3D viewport, its corresponding properties will be listed in this pane, where we can view the settings and configure them as needed.
Likewise, this pane is also used to view the properties of the cameras and any other connected devices that are listed in the Devices pane.
This pane will be used in almost all of the workflows. It can be accessed under the View tab in Motive or by clicking its icon on the main toolbar.
The top viewport is the Perspective viewport, where 3D data is shown in Motive. Here, you can view and analyze 3D data within a calibrated capture volume. This panel will be used during live capture and also during playback of recorded data. In the Perspective viewport, you can select any objects in the capture volume, use the context menu to perform actions, or use the Properties pane to view and modify the associated properties.
You can use the dropdown menu at the top-left corner to switch between different viewports, and you can use the button at the top-right corner to split the viewport into multiple views. If desired, an additional Viewer pane can be opened under the View tab or by clicking the icons on the main toolbar.
The bottom viewport is the Cameras viewport. Here, you can monitor the view of each camera in the system and apply masks. This pane is also used to examine the markers, or IR light, seen by the cameras in order to examine how 2D data is processed and reconstructed into 3D coordinates.
The Calibration pane is used in the camera calibration process. In order to compute 3D coordinates from captured 2D images, the camera system needs to be calibrated first. All tools necessary for calibration are included in the Calibration pane, which can be accessed under the View tab or by clicking its icon on the main toolbar.
Each capture recording is saved to a Take (TAK) file, and related Take files can be organized in session folders. Start your capture by first creating a new session folder: create a new folder in the desired directory on the host computer and load it into the Data pane, either by clicking the icon or by drag-and-dropping it onto the data management pane. If no session folder is loaded, all recordings will be saved to the default folder located in the user documents directory (Documents\OptiTrack\Default). All newly recorded Takes will be saved within the currently selected session folder, which is marked with the symbol.
USB camera models, including Flex series cameras and V120:Duo/Trio tracking bars, are currently not supported in Motive 3.0.x versions. For those systems, please refer to the old wiki site.
This page covers the general specifications of the Prime Color camera. For details on how to set up and use the Prime Color, please refer to the Prime Color setup page in this wiki.
This guide is for OptiTrack active components only. Third-party IR LEDs will not work with the instructions provided on this page.
For active components that were shipped prior to September 2017, please see the firmware compatibility page for more information.
Active tracking is supported only with Ethernet camera systems (Prime series or Slim 13E cameras). For instructions on how to set up a camera system, see the Hardware Setup workflow pages.
For more information, please read through the active marker tracking page.
When tracking only active markers, the cameras do not need to emit IR light. In this case, you can disable the IR setting in the camera properties.
With a BaseStation and active markers communicating on the same RF channel, active markers will be reconstructed and tracked in Motive automatically. From the unique illumination patterns, each active marker gets labeled individually, and a unique marker ID gets assigned to the corresponding reconstruction in Motive. To check the marker IDs of respective reconstructions, enable the Marker Labels option under the visual aids (Eye button), and the IDs of selected markers will be displayed. The marker IDs assigned to active marker reconstructions are unique and can be used to point to a specific marker among many reconstructions in the scene.
Before setting up a motion capture system, choose a suitable setup area and prepare it in order to achieve the best tracking performance. This page highlights some of the considerations to make when preparing the setup area for general tracking applications. Note that this page provides general recommendations, which may vary depending on the size of the system or the purpose of the capture.
First of all, pick a place to set up the capture volume.
Setup Area Size
System setup area depends on the size of the mocap system and how the cameras are positioned. To get a general idea, check out the Build Your Own feature on our website.
Make sure there is plenty of room for setting up the cameras; it is usually beneficial to have extra space in case the system setup needs to be altered. Also pick an area with enough vertical space. Setting up the cameras at a high elevation is beneficial because it gives the cameras wider lines of sight, providing better coverage of the capture volume.
Minimal Foot Traffic
After camera system calibration, the system should remain unaltered in order to maintain the calibration quality. Physical contact with the cameras could change the setup, requiring re-calibration. To prevent such cases, pick a space with only minimal foot traffic.
Flooring
Avoid reflective flooring. The IR lights from the cameras could be reflected by it and interfere with tracking. If this is inevitable, consider covering the floor with surface mats to prevent the reflections.
Avoid flexible or deformable flooring; such flooring can negatively impact your system's calibration.
For the best tracking performance, minimize ambient light interference within the setup area. The motion capture cameras track the markers by detecting reflected infrared light, and any extraneous IR light within the capture volume could interfere with tracking.
Sunlight: Block any open windows that might let sunlight in. Sunlight contains wavelength within the IR spectrum and could interfere with the cameras.
IR Light sources: Remove any unnecessary lights in IR wavelength range from the capture volume. IR lights could be emitted from sources such as incandescent, halogen, and high-pressure sodium lights or any other IR based devices.
All cameras are equipped with IR filters, so extraneous light outside of the infrared spectrum (e.g. fluorescent lights) will not interfere with the cameras. IR lights that cannot be removed or blocked from the setup area can be masked in Motive using the Masking Tools during system calibration. However, this feature completely discards image data within the masked regions, and overuse could negatively impact tracking. Thus, it is best to physically remove the object whenever possible.
Dark-colored objects absorb most visible light, but that does not mean they absorb IR light as well. Therefore, the color of a material is not a good way to determine whether an object will be visible in the IR spectrum. Some materials look dark to human eyes but appear bright white to the IR cameras. If these items are placed within the tracking volume, they could introduce extraneous reconstructions.
Since you already have the IR cameras in hand, use one of the cameras to check whether there are IR white materials within the volume. If there are, move them out of the volume or cover them up.
Remove any unnecessary obstacles out of the capture volume since they could block cameras' view and prevent them from tracking the markers. Leave only the items that are necessary for the capture.
Remove reflective objects nearby or within the setup area since IR illumination from the cameras could be reflected by them. You can also use non-reflective tapes to cover the reflective parts.
Prime 41 and Prime 17W cameras are equipped with powerful IR LED rings that enable tracking outdoors, even in the presence of some extraneous IR light. The strong illumination from the Prime 41 cameras allows a mocap system to better distinguish marker reflections from extraneous illumination. System settings and camera placements may need to be adjusted for outdoor tracking applications.
Please read through the Outdoor Tracking Setup page for more information.
USB Cameras are currently not supported in 3.x versions of Motive. The USB camera pages on this wiki are purely for reference only, at this time.
The OptiTrack Duo/Trio tracking bars are factory calibrated, and there is no need to calibrate the cameras to use the system. By default, the tracking volume origin is set at the center of the cameras, and the axes are oriented so that the Z-axis points forward, the Y-axis up, and the X-axis left.
If you wish to change the location and orientation of the global axis, you can use the ground plane tools from the Calibration pane and use a Rigid Body or a calibration square to set the global origin.
When using the Duo/Trio tracking bars, you can set the coordinate origin at the desired location and orientation using either a Rigid Body or a calibration square as a reference point. Using a calibration square will allow you to set the origin more accurately. You can also use a custom calibration square to set this.
Adjusting the Coordinate System
First, place the calibration square at the desired origin. If you are using a Rigid Body, its pivot point position and orientation will be used as the reference.
[Motive] Open the Calibration pane.
[Motive] Open the Ground Planes page.
[Motive] Select the type of calibration square that will be used as a reference to set the global origin. Set it to Auto if you are using a calibration square from us. If you are using a Rigid Body, select the Rigid Body option from the drop-down menu. If you are using a custom calibration square, you will also need to set the vertical offset.
[Motive] Select the calibration square markers or the Rigid Body markers in the Perspective View pane.
[Motive] Click the Set Ground Plane button, and the global origin will be adjusted.
In optical motion capture systems, proper camera placement is very important in order to efficiently utilize the captured images from each camera. Before setting up the cameras, it is a good idea to plan ahead and create a blueprint of the camera placement layout. This page highlights the key aspects and tips for efficient camera placement.
A well-arranged camera placement can significantly improve the tracking quality. When tracking markers, 3D coordinates are reconstructed from the 2D views seen by each camera in the system. More specifically, correlated 2D marker positions are triangulated to compute the 3D position of each marker. Thus, having multiple distinct vantages on the target volume is beneficial because it allows wider angles for the triangulation algorithm, which in turn improves the tracking quality. Accordingly, an efficient camera arrangement should have cameras distributed appropriately around the capture volume. By doing so, not only will the tracking accuracy be improved, but uncorrelated rays and marker occlusions will also be prevented. Depending on the type of tracking application, the capture volume environment, and the size of the mocap system, proper camera placement layouts may vary.
An ideal camera placement varies depending on the capture application. In order to figure out the best placements for a specific application, a clear understanding of the fundamentals of optical motion capture is necessary.
To calculate 3D marker locations, tracked markers must be simultaneously captured by at least two synchronized cameras in the system. When not enough cameras are capturing the 2D positions, the 3D marker will not be present in the captured data. As a result, the collected marker trajectory will have gaps, and the accuracy of the capture will be reduced. Furthermore, extra effort and time will be required for post-processing the data. Thus, marker visibility throughout the capture is very important for tracking quality, and cameras need to be capturing at diverse vantages so that marker occlusions are minimized.
Depending on captured motion types and volume settings, the instructions for ideal camera arrangement vary. For applications that require tracking markers at low heights, it would be beneficial to have some cameras placed and aimed at low elevations. For applications tracking markers placed strictly on the front of the subject, cameras on the rear won't see those and as a result, become unnecessary. For large volume setups, installing cameras circumnavigating the volume at the highest elevation will maximize camera coverage and the capture volume size. For captures valuing extreme accuracy, it is better to place cameras close to the object so that cameras capture more pixels per marker and more accurately track small changes in their position.
Again, the optimal camera arrangement depends on the purpose and features of the capture application. Plan the camera placement specific to the capture application so that the capability of the system is fully utilized. Please contact us if you need help determining the optimal camera arrangement.
For common applications of tracking 3D position and orientation of Skeletons and Rigid Bodies, place the cameras on the periphery of the capture volume. This setup typically maximizes the camera overlap and minimizes wasted camera coverage. General tips include the following:
Mount cameras at the desired maximum height of the capture volume.
Distribute the cameras equidistantly around the setup area.
Adjust angles of cameras and aim them towards the target volume.
For cameras with rectangular FOVs, mount the cameras in landscape orientation. In very small setup areas, cameras can be aimed in portrait orientation to increase vertical coverage, but this typically reduces camera overlap, which can reduce marker continuity and data quality.
TIP: For capture setups involving large camera counts, it can be useful to separate the capture volume into two or more sections. This reduces the computational load on the software.
Around the volume
For common applications tracking a Skeleton or a Rigid Body to obtain the 6 Degrees of Freedom (x,y,z-position and orientation) data, it is beneficial to arrange the cameras around the periphery of the capture volume for tracking markers both in front and back of the subject.
Camera Elevations
For a typical motion capture setup, placing cameras at high elevations is recommended. Doing so maximizes the capture coverage in the volume and minimizes the chance of subjects bumping into the truss structure, which can degrade the calibration. Furthermore, when cameras are placed at low elevations and aimed across from one another, the synchronized IR illumination from each camera will be detected and will need to be masked from the 2D view.
However, it can be beneficial to place cameras at varying elevations. Doing so will provide more diverse viewing angles from both high and low elevations and can significantly increase the coverage of the volume. The frequency of marker occlusions will be reduced, and the accuracy of detecting the marker elevations will be improved.
Camera to Camera Distance
Separating every camera by a consistent distance is recommended. When cameras are placed in close vicinity, they capture similar images of the tracked subject, and the extra images contribute neither to preventing occlusions nor to the reconstruction calculations. This overlap detracts from the benefit of a higher camera count and also doubles the computational load for the calibration process. Moreover, it increases the chance of marker occlusions, because markers will be blocked from multiple views simultaneously whenever obstacles are introduced.
Camera to Object Distance
The ideal distance between a camera and the captured subject also depends on the purpose of the capture. A long distance between the camera and the object gives more camera coverage for larger volume setups. On the other hand, capturing at a short distance gives less camera coverage, but the tracking measurements will be more accurate. The camera's lens focus ring may need to be adjusted for close-up tracking applications.
Choosing an appropriate camera mounting solution is very important when setting up a capture volume. A stable setup not only prevents camera damage from unexpected collisions, but it also maintains calibration quality throughout capture. All OptiTrack cameras have ¼-20 UNC Threaded holes – ¼ inch diameter, 20 threads/inch – which is the industry standard for mounting cameras. Before planning the mount structures, make sure that you have optimized your camera placement plans.
Due to thermal expansion issues when mounted to walls, we recommend using Trusses or Tripods as primary mounting structures.
Trusses will offer the most stability and are less prone to unwanted camera movement for more accurate tracking.
Tripods, alternatively, offer more mobility to change the capture volume.
Wall Mounts and Speed Rails offer the ability to maximize space, but are the most susceptible to vibration from HVAC systems, thermal expansion, sway in earthquake-resistant buildings, etc. This vibration can cause inaccurate calibration and tracking.
Camera clamps are used to fasten cameras onto stable mounting structures, such as a truss system, wall mounts, speed rails, or large tripods. There are some considerations when choosing a clamp for each camera. Most importantly, the clamps need to be able to bear the camera weight. Also, we recommend using clamps that offer adjustment of all 3 degrees of orientation: pitch, yaw, and roll. The stability of your mounting structure and the placement of each camera is very important for the quality of the mocap data, and as such we recommend using one of the mounting structures suggested in this page.
Here at OptiTrack, we recommend and provide Manfrotto clamps that have been tested and verified to ensure a solid hold on cameras and mounting structures. If you would like more information regarding Manfrotto clamps, please visit our Mounts and Tripods page on our website or reach out to our Sales team.
Manfrotto clamps come in three parts:
Manfrotto 035 Super Clamp
Manfrotto 056 3-Way, Pan-and-Tilt Head with 1/4"-20 Mount
Reversible Short Brass Stud
For proper assembly, please follow the steps below:
Place the brass stud into the 16mm hexagon socket in the Manfrotto Super Clamp.
Depress the spring-loaded button so the brass stud will lock into place.
Tighten the safety pin mechanism to secure the brass stud within the hexagon socket. Be sure that the 3/8″ screw (larger) end of the stud is facing out.
From here, attach the Super Clamp to the 3-Way, Pan-and-Tilt Head by screwing in the brass stud into the screw hole of the 3-Way, Pan-and-Tilt Head.
Tighten these two components firmly, as you don't want them to swivel when installing cameras. It helps to first tighten the 360° swivel on the 3-Way, Pan-and-Tilt Head, as this ensures no unwanted swiveling occurs when tightening the two components together.
Once these two components are attached, you will have a fully functioning clamp for mounting your cameras.
Large-scale mounting structures, such as trusses and wall mounts, are the most stable and can be used to reliably cover larger volumes. Cameras are well-fixed, and the need for recalibration is reduced. However, they are not easily portable and cannot be easily adjusted. On the other hand, smaller mounting structures, such as tripods and C-clamps, are more portable, simple to set up, and can be easily adjusted if needed. However, they are less stable and more vulnerable to external impacts, which can distort the camera position and the calibration. The choice of mounting structure depends on the capture environment, the size of the volume, and the purpose of the capture. You can use a combination of both methods as needed for unique applications.
Choosing an appropriate structure is critical in preparing the capture volume, and we recommend our customers consult our Sales Engineers for planning a layout for the camera mount setup.
A truss system provides a sturdy structure and a customizable layout that can cover diverse capture volume sizes, ranging from a small volume to a very large volume. Cameras are mounted on the truss beam using the camera clamps.
Consult with the truss system provider or our Sales Engineers for setting up the truss system.
Follow the truss installation instructions and assemble the trusses on-site, using the fastening pins to secure each truss segment.
Fasten the base truss to the ground.
Connect each of the segments and fix them by inserting a fastening pin.
Attach clamps to the cameras.
Mount the clamps to the truss beam.
Aim each camera.
Tripods are portable, simple to install, and less restricted by environmental constraints. There are various sizes and types of tripods for different applications. To ensure stability, each tripod needs to be installed on a hard surface (e.g., concrete). Usually, one camera is attached per tripod, but camera clamps can be used in combination to fasten multiple cameras along the legs, as long as the tripod is stable enough to bear the weight. Note that tripod setups are less stable and more vulnerable to physical impacts. Any camera movement after calibration will degrade the calibration quality, and the volume will need to be re-calibrated.
Wall mounts and speed rails are used with camera clamps to mount the cameras along the walls of the capture volume. This setup is very stable and has a low chance of being disturbed by physical contact. The capture volume size and layout will depend on the size of the room. However, note that the wall, or the building itself, may fluctuate slightly with the changing ambient temperature throughout the day, so you may need to routinely re-calibrate the volume if you are looking for precise measurements.
Below are recommended steps when installing speed rails onto different types of wall material. However, depending on your space, you may require alternative methods.
Although we have instructions below for installing speed rails, we highly recommend leaving the installation to qualified contractors.
General Tools
Cordless drill
Socket driver bits for drill
Various drill bits
Hex head Allen wrench set
Laser level
Speed Rail Parts
Pre-cut rails
Internal locking splice
5" offset wall mount bracket
End caps (should already be pre-installed onto pipes)
Elbow speed rail bracket (optional)
Tee speed rail bracket (optional)
Wood Stud Setup
Wood frame studs behind drywall require:
Pre-drilled holes.
2 1/2" long x 5/16" hex head wood lag screws.
Metal Stud Framing Setup
Metal stud framing behind drywall requires:
Undersized pre-drilled holes as a marker in the drywall.
2"long x 5/16" self tapping metal screws with hex head.
Metal studs can strip easily if the pre-drilled hole is too large.
Concrete Block/Wall Setup
Requires:
Pre-drilled holes.
Concrete anchors inserted into pre-drilled hole.
2 1/2" concrete lags.
Concrete anchors and lags must match for a proper fit.
It's easiest and safest to install with another person rather than alone; a second person is especially necessary when rails have been pre-inserted into brackets prior to installing on a wall.
Pre-drill bracket locations.
If working in a smaller space, slip speed rails into brackets prior to installing.
Install all brackets by the top lag first.
Check that all brackets are correctly spaced and level.
Install bottom lags.
Slip speed rails into brackets.
Tighten the set screw and the internal locking splice of the speed rail.
Attach clamps to the cameras.
Attach the clamps to the rail.
Aim each camera.
Helpful Tips/Additional Information
The 5" offset wall brackets should not exceed 4' between each bracket.
Speed rails are shipped no longer than 8'.
Using blue painter's tape is a simple way to mark placement without messing up paint.
Make sure to slide in the end of the speed rail without the end cap first. If installed with the end-cap end first, the cap will "mushroom" the end and make it difficult to slip brackets onto the speed rail.
Check brackets for any burrs or sharp edges and gently sand them off to avoid the bracket scratching the finish on the speed rail.
To further protect the speed rail's finish from the bracket, place a piece of paper inside the bracket before sliding the speed rail through.
Notes on the NETGEAR ProSafe GSM7228S switch
When enabled, the Broadcast Storm Control feature on the NETGEAR ProSafe GSM7228S may interfere with the synchronization mechanism used by OptiTrack Ethernet cameras. For proper system operation, the Storm Control feature must be disabled for all of the ports used on this aggregation switch.
Step 1. Access the IPv4 settings on the network card that the camera network is connected to.
On Windows, open the Network and Sharing Center and click Change adapter settings.
Right-click on the adapter that the network switch is connected to and access its properties.
From the list of items, select Internet Protocol Version 4 (TCP/IPv4) and click the Properties button.
Step 2. Make a note of the IP address settings for the network card connected to the switch.
Step 3. Change the IP address of the network card connected to the switch to 169.254.100.200 (see the command-line sketch after these steps for a scripted alternative).
Step 4. Open a web browser and navigate to 169.254.100.100.
Step 5. Log into the switch with the username 'admin' and leave the password blank.
Step 6. Navigate to Security->Traffic Control->Storm Control->Storm Control Global Configuration
Step 7. Ensure that all storm control options are disabled
Step 8. Navigate to Maintenance->Save Config->Save Configuration
Step 9. Check the 'Save Configuration' check box
Step 10. Log out of the switch, or just close the browser window
Step 11. Restore the IP address settings noted in Step 2 for the network card connected to the switch
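As an alternative to the adapter-properties dialog in Steps 3 and 11, the same address change can be scripted with the built-in netsh tool from an elevated Command Prompt. This is a minimal sketch only: the adapter name "Ethernet" is an assumption and must be replaced with the name of the network card connected to the switch.

REM Step 3 equivalent: assign the temporary link-local address (adapter name is an assumption)
netsh interface ipv4 set address name="Ethernet" static 169.254.100.200 255.255.0.0

REM Step 11 equivalent: restore the settings noted in Step 2
REM (use the static values you recorded, or "dhcp" if the card obtained an address automatically)
netsh interface ipv4 set address name="Ethernet" dhcp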
This page provides instructions on how to configure the CameraNicFilter.xml file to whitelist or blacklist specific cameras from the connected camera network.
Starting with Motive 2.1, you can specify which cameras to utilize among the connected Ethernet cameras in a system. This can be done by setting up an XML file (CameraNicFilter.xml) and placing it in Motive's ProgramData directory: C:\ProgramData\OptiTrack\Motive\CameraNicFilter.xml. Once this is set, Motive will initialize only the specified cameras within the respective network interface. This allows users to distribute the cameras to specific network interfaces on a computer or on multiple computers.
Additional Note:
This filter works with Ethernet camera systems only. USB camera systems are not supported.
At the time of writing, the eSync is NOT supported; the eSync must not be present in the system for the filter to work properly.
For common applications, there is usually no need to separate the cameras onto different network interfaces. However, there are a few situations where you may want to use this filter to segregate the cameras. Below are some sample applications of the filter:
Multiple Prime Color cameras
When there are multiple Prime Color cameras in a setup, you can configure the filter to spread out the data load. In other words, you can uplink color camera data through a separate network interface (NIC) and distribute the data traffic to prevent any bandwidth bottleneck. To accomplish this, multiple NICs must be present on the host computer so that the data can be uplinked onto different interfaces.
Active marker tracking on multiple capture volumes
For active marker tracking, this filter can be used to distribute the cameras to different host computers. By doing so, you can segregate the cameras into multiple capture volumes and have them share the same connected BaseStation. This can be beneficial for VR applications, especially if you plan on having multiple volumes to calibrate, because you can use the same active components between different volumes.
To separate the cameras, use a text editor to create an XML file named CameraNicFilter.xml. In this XML file, you will specify which cameras to whitelist or blacklist within each connected network interface. Please note that it is very important for the XML file to match the expected format; for this reason, we strongly recommend starting from the template below.
Attached below is a basic template of the CameraNicFilter.xml file. On each NIC element, you can specify each network interface using the IPAddress attribute, and then in its child elements, you can specifically set which cameras to whitelist or blacklist using their serial numbers.
For each network interface that you will be using to communicate with the cameras, you will need to create a <NIC> element and assign a network IP address (IPv4) to its IPAddress attribute. Then, under each NIC element, you can specify which cameras to use or not to use.
Please make sure correct IP addresses are assigned when configuring the NIC element. Run the ipconfig command in the Windows command prompt to list the assigned IP addresses of the available networks on the computer, and then use the IPv4 address of the network that you wish to use. When necessary, you can also set a static IP address for the network interface and use a known address value for easier setup.
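For example, running ipconfig from a Command Prompt lists each adapter with its IPv4 address; the adapter name and address values below are placeholders for illustration only.

ipconfig

Ethernet adapter Ethernet 2:
   IPv4 Address. . . . . . . . . . . : 192.168.1.2
   Subnet Mask . . . . . . . . . . . : 255.255.255.0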
Under the NIC element, define two child elements: <Whitelist> and <Blacklist>. In each element, you will be specifying the cameras using their serial numbers. Within each network interface, only the cameras listed under the <Whitelist> element will be used and all of the cameras under <Blacklist> will be ignored.
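The following is a minimal sketch of what the file can look like, assembled from the elements described above. The root element name, IP addresses, and serial numbers are placeholders assumed for illustration; the <NIC>, <Whitelist>, <Blacklist>, and <Serial> structure is as described in this section.

<?xml version="1.0" encoding="utf-8"?>
<!-- CameraNicFilter.xml sketch; the root element name is an assumption -->
<CameraNicFilter>
  <!-- Interface 192.168.1.2: initialize only the two listed cameras -->
  <NIC IPAddress="192.168.1.2">
    <Whitelist>
      <Serial>M18883</Serial>
      <Serial>M18884</Serial>
    </Whitelist>
    <Blacklist></Blacklist>
  </NIC>
  <!-- Interface 192.168.1.3: ignore all color cameras (serials starting with C) -->
  <NIC IPAddress="192.168.1.3">
    <Whitelist></Whitelist>
    <Blacklist>
      <Serial>C</Serial>
    </Blacklist>
  </NIC>
</CameraNicFilter>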
As shown in the above template, you can specify which cameras to whitelist or blacklist using the corresponding camera serial numbers. For example, <Serial>M18883</Serial> specifies the camera with serial number M18883. You can also use a partial serial number as a wildcard to specify all cameras with matching serial numbers. For example, if you wish to blacklist all color cameras in a network (192.168.1.3), you can use C as the wildcard serial number, since the serial numbers of all color cameras start with C.
Once the XML file is configured, save it in the ProgramData directory: C:\ProgramData\OptiTrack\Motive\CameraNicFilter.xml. If everything is set up properly, only the whitelisted cameras under each network interface will be initialized in Motive, and only the data from the specified cameras will be uplinked through the respective network interface.
Below are a couple of diagrams for properly setting up your network. These setups are strongly advised and have been tested for optimal use and safety.
Ethernet Camera Models: PrimeX series and SlimX 13 cameras. Follow the wiring diagram below to connect each of the required system components.
OptiTrack's Ethernet cameras require PoE or PoE+ Gigabit Ethernet switches, depending on the camera's power requirements. The switch serves two functions: transferring camera data to the host PC and supplying power to each camera over the Ethernet cable (PoE).
The switch must provide consistent power to every port simultaneously in order to power each camera. Standard PoE switches must provide a full 15.4 watts to every port simultaneously. PrimeX 41, PrimeX 22, and Prime Color cameras have stronger IR strobes that require more power for maximum performance. These cameras must be routed through PoE+ switches that provide a full 30 watts of power to each port simultaneously. Note that PoE midspan devices and power injectors are not suitable for Ethernet camera systems.
The following is generally used for large PoE+ camera setups with multiple camera switches. Please refer to the Switch Power Budget and Camera Power Requirements tab above for more information.
Some switches have a power budget smaller than what the attached cameras require. In larger camera setups, this can result in multiple switches that can only use a portion of their available ports. In this case, we recommend a Redundant Power System (RPS) to extend the power budget of your switch. For example, a 24-port switch may have a 370W power budget, which supports only 12 PoE+ cameras that each require 30W. With an RPS, however, the same 24-port switch can power 24 such PoE+ cameras, utilizing all 24 of its PoE ports.
The eSync is used to enable synchronization and timecode in Ethernet-based mocap systems. Only one device is needed per system, and it enables you to link the system to almost any signal source. It has multiple synchronization ports which allow integrating external signals from other devices. When an eSync is used, it acts as the master in the synchronization chain.
With large camera setups, connect the eSync to the aggregation switch via a standard Ethernet port for more stable camera synchronization. If PoE is not supported on the aggregation switch, the sync hub will need to be powered separately from a power outlet.
Then, open the Status Log panel and check that there are no 2D frame drops. You may see a few frame drops when booting up the system or when switching between Live and Edit modes; however, this should occur only momentarily. If the system continues to drop 2D frames, there is a problem with how the system is delivering the camera data. Please refer to the troubleshooting section for more details.
An Ethernet camera system connects via Ethernet cables. Ethernet-based camera models include the PrimeX series (PrimeX 13, 13W, 22, 41), SlimX 13, and Prime Color models. Ethernet cables not only offer faster data transfer rates, they also provide power over Ethernet to each camera while transferring data to the host PC. This reduces the number of required cables and simplifies the overall setup. Furthermore, Ethernet cables support much longer runs (up to 100 m), allowing systems to cover large volumes.
Host PC with an isolated network (PCI/e NIC)
Ethernet Cameras
Ethernet cables
Ethernet PoE/PoE+ Switches
Uplink switch (for large camera count setup)
The eSync (optional, for synchronization)
Cable Type
There are multiple categories of Ethernet cables, each with different specifications for maximum data transmission rate and cable length. For an Ethernet-based system, Cat6 or above Gigabit Ethernet cables should be used. 10 Gigabit Ethernet cables (Cat6a or above) are recommended, in conjunction with a 10 Gigabit uplink switch, for the connection between the uplink switch and the host PC in order to accommodate the high data traffic.
Note
Electromagnetic Shielding
We recommend using only cables that have electromagnetic interference shielding. If unshielded cables are used, cables in close proximity to each other have the potential to create data transfer interference and cause cameras to stall in Motive.
Unshielded cables do not protect the cameras from Electrostatic Discharge (ESD), which can damage the camera. Do not use unshielded cables in environments where ESD exposure is a risk.
Our current general standard for network switches is:
PoE ports with at least 1 Gbps of data throughput per port.
Switches not purchased from OptiTrack are not supported by our support team.
You'll want to remove as much bloatware as possible from your PC in order to optimize your system and ensure that minimal unnecessary background processes are running. Background processes can take up valuable CPU resources from Motive and cause frame drops while running your camera system.
There are many external resources on removing unused apps and halting unnecessary background processes, so this will not be covered within the scope of this page.
As a general rule for all OptiTrack camera systems, you'll want to disable all Windows firewalls and either disable or remove any antivirus software. If firewalls or antivirus software are enabled, they can cause frame drops while running your camera system.
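As a sketch of one way to do this, the built-in Windows firewall can be turned off for all network profiles from an elevated Command Prompt; third-party antivirus software must still be disabled or removed through its own vendor's tools.

REM Turn off the Windows firewall for all profiles (run as Administrator)
netsh advfirewall set allprofiles state off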
In order for Motive to run above other processes, you'll need to change the Priority of Motive.exe to High.
Right-click the Motive shortcut on your desktop and open its Properties.
In the Target: text field, enter the path below. This will allow Motive to run at High priority, and the setting will persist across closing and reopening Motive.
C:\Windows\System32\cmd.exe /C start "" /high "C:\Program Files\OptiTrack\Motive\Motive.exe"
Please refrain from setting the priority to Realtime. If Realtime is selected, you can lose input control (mouse, keyboard, etc.), since Windows may prioritize Motive above input processes.
If you're running a system with a lower CPU core count, you may need to prevent Motive from running on a couple of cores. This will help stabilize the overall system and free up some cores for other required Windows processes.
From the Task Manager, navigate to the Details tab and right-click on Motive.exe
Select Set Affinity
From this window, uncheck the cores you do not want Motive.exe to run on.
Click OK
Please note that you should only ever disable 2 cores or fewer to ensure Motive still runs smoothly.
We recommend starting with only one core and working your way up to two if you're still experiencing frame drops with your camera system; a command-line alternative is sketched below.
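If you prefer setting the affinity at launch instead of through the Task Manager, the built-in start command accepts a hexadecimal affinity mask and can be combined with the High-priority shortcut target shown above. This is a sketch under the assumption of an 8-core CPU and the default Motive install path; adapt the mask to your machine.

REM 0x3F = binary 00111111: allow Motive on cores 0-5, leaving cores 6 and 7 free
C:\Windows\System32\cmd.exe /C start "" /high /affinity 0x3F "C:\Program Files\OptiTrack\Motive\Motive.exe"

Unlike the unchecked boxes in Task Manager, an affinity applied this way lasts only for that launch.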
Your Network Interface Card (NIC) has a few settings that you can change to optimize your system.
To navigate to the camera network's NIC:
Open Windows Settings
Select Ethernet from the navigation sidebar
Under Related settings select Change adapter options
From the Network Connections pop-up window, right-click on your NIC and select Properties
Select the Configure... button and navigate to the Advanced tab
For the Speed and Duplex property, select the highest throughput your NIC supports. If you have a 10 Gbps NIC, make sure that 10 Gbps Full Duplex is selected. This property allows the NIC to operate at its full capacity. If this setting is not set to Full, Windows has a tendency to throttle the NIC's throughput, causing a 10 Gbps NIC to send data at only 2 Gbps.
Interrupt Moderation allows the NIC to moderate interrupts. When a significant amount of data is being uplinked to Motive, more interrupts occur, hindering system performance. You'll want to set this property to Disabled.
After the above properties have been applied, the NIC will go through an automatic reboot process. During this process, your camera network may appear to be down for a few minutes. This is normal; once the NIC has rebooted, the network should work as expected.
Although not recommended, you may use a laptop PC to run a larger or Prime Color camera system. When using a laptop, you'll need to use an external network adapter. The above settings typically do not apply to these types of adapters, so no properties need to be changed.
It is important to use a Thunderbolt port adapter with corresponding Thunderbolt ports on your laptop, as opposed to standard USB-C adapters and ports.
USB camera models, including Flex series cameras and V120:Duo/Trio tracking bars, are currently not supported in Motive 3.x versions. For those systems, please refer to the documentation for earlier Motive versions.
Connect PoE Switch(es) to the Host PC: Start by connecting a PoE switch to the host PC via an Ethernet cable. Since the camera system takes up a large amount of data bandwidth, the Ethernet camera network traffic must be separated from the office/local area network. If the computer used for capture is connected to an existing network, use a second Ethernet port or an add-on network card to connect the computer to the camera network. When you do, make sure to turn off your computer's firewall for that particular network under the Windows Firewall settings.
Connect the Ethernet Cameras to the PoE Switch(es): Ethernet cameras connect to the host PC via PoE/PoE+ switches using Cat6 or above Ethernet cables.
Power the Switches: The switch must be powered in order to power the cameras. To completely shut down the camera system, the network switch needs to be powered off.
Ethernet Cables: Ethernet cable connection is subject to the limitations of the PoE (Power over Ethernet) and Ethernet communications standards, meaning that the distance between camera and switch can go up to about 100 meters when using Cat 6 cables (Ethernet cable type Cat5e or below is not supported). For best performance, do not connect devices other than the computer to the camera network. Add-on network cards should be installed if additional Ethernet ports are required.
On smaller systems, you may not need to use the SFP ports to uplink your data. The SFP port on the switch, with the SFP module provided by OptiTrack, is intended for heavily loaded systems (i.e., larger camera counts or Prime Color camera systems).
Ethernet Cable Requirements
Cable Type
There are multiple categories of Ethernet cables, each with different specifications for maximum data transmission rate and cable length. For an Ethernet-based system, category 6 or above Gigabit Ethernet cables should be used. 10 Gigabit Ethernet cables (Cat6a or above) are recommended, in conjunction with a 10 Gigabit uplink switch, for the connection between the uplink switch and the host PC in order to accommodate the high data traffic. A 10 Gb uplink and NIC are recommended for multi-switch setups or when using Prime Color cameras.
Electromagnetic Shielding
Also, please use cables that have electromagnetic interference shielding. If unshielded cables are used, cables that are close to each other can interfere with one another and cause the cameras to drop frames in Motive.
External Sync: If you wish to connect external devices, use the eSync synchronization hub. Connect the eSync into one of the PoE switches using an Ethernet cable, or if you have a multi-switch setup, plug the eSync into the aggregation switch.
Uplink Switch: For systems with higher camera counts that use multiple PoE switches, use an uplink Ethernet switch to link and connect all of the switches to the host PC. The switches must be connected in a star topology, with the uplink switch at the central node connecting to the host PC. NEVER daisy-chain multiple PoE switches in series, because doing so can introduce latency into the system.
High Camera Counts: For setups of more than 24 Prime series cameras, we recommend using a 10 Gigabit uplink switch and connecting it to the host PC via an Ethernet cable that supports 10 Gigabit transfer rates (Cat6a or above). This will provide larger data bandwidth and reduce data transfer latency.
At this point, all of the connected cameras will be listed in Motive when you start it up. Check to make sure all of the connected cameras are properly listed.
This page provides the general specifications for an OptiTrack camera setup. Please see our related setup pages for more detailed instructions on how to set up your Ethernet camera system.
10 Gb uplink switches, NICs, and cables are recommended for large camera counts or high-data cameras like the Prime Color cameras. Typically, 1 Gb switches, NICs, and cables are enough to accommodate smaller and moderately sized systems. If you're unsure whether you need more than 1 Gb, please contact one of our Sales Engineers for more information.
A power budget that is able to support the desired number of cameras. If the desired number of cameras exceeds the power budget of a single switch, additional switches may be used. Please see the section below for more information.
For specific brands/models of switches, please contact our Sales Engineers.
For the most part, the switches provided by OptiTrack are ready to go without any additional settings or configuration. If you're having issues setting up switches provided by OptiTrack, please see the Cabling and Load Balancing section below or contact our Support team.
A: 2D frame drops are logged in the Status Log panel and are indicated with a warning sign next to the corresponding camera. You may see a few frame drops when booting up the system or when switching between Live and Edit modes; however, this should occur only momentarily. If the system continues to drop 2D frames, there is a problem with receiving the camera data. In many cases, this is due to networking problems.
To narrow down the issue, disable the real-time reconstruction and check whether frames are still dropping. If the drops stop, the problem is associated with either software configuration or CPU processing. If they continue, the problem can be narrowed down to the network configuration, which may be resolved by doing the following:
The settings below are generally for larger camera setups and Prime Color camera setups. Typically, smaller systems will not need to use the settings below. When in doubt, please reach out to our team.
In most cases, your switch settings will not need to be altered. However, if your switch has built-in Broadcast Storm Control, you'll want to disable this feature.
In the event that SFP ports are NOT used, please use one of the standard Ethernet ports on your switch to uplink data to Motive. If you're unsure whether you'll need to use the SFP port and SFP module, please reach out to either our Sales or Support teams.
PoE switch requirement: The PoE switches must be able to provide 15.4W of power to every port simultaneously. PrimeX 41, PrimeX 22, and Prime Color camera models run in a high-power mode to achieve longer tracking ranges, and they require 30W of power from each port. If you wish to operate these cameras in standard PoE mode, disable the corresponding setting under the application settings. For network switches provided by OptiTrack, refer to the label for the number of cameras supported by each switch.