Advances in Markerless Motion Capture Systems: A Review of OpenCap and Its Applications

Article information

Asian J Kinesiol. 2024;26(4):42-47
Publication date (electronic): 2024 October 31
doi: https://doi.org/10.15758/ajk.2024.26.4.42
Division of Kinesiology and Sport Management, University of South Dakota, Vermillion, SD, USA
*Correspondence: Hyung Suk Yang, PhD, Division of Kinesiology and Sport Management, University of South Dakota, 414 E. Clark St., Vermillion, SD, USA 57069; Tel: 001-605-658-5626; E-mail: HS.Yang@usd.edu
Received 2024 September 5; Accepted 2024 October 10.

Abstract

Motion capture technology has long been a cornerstone of biomechanical research and analysis, traditionally relying on marker-based systems. However, these systems have inherent limitations, including high costs, time-consuming setup, and constraints on natural movement. Recent advancements in computer vision and machine learning have paved the way for markerless motion capture systems, such as OpenCap, which offer a more accessible and natural approach to biomechanical analysis. This review focuses on OpenCap, exploring its accuracy, challenges, and future potential, while comparing it with traditional motion capture technologies and discussing its applications in sports, clinical rehabilitation, and everyday use. Areas in need of refinement, such as improving pose estimation algorithms and addressing inter-trial variability, are identified as key future research directions. Other markerless systems are also compared in terms of advantages, limitations, and applications. The findings suggest that while OpenCap and similar systems hold significant promise, further research and refinement are necessary to fully realize their potential and integrate them seamlessly into various biomechanical applications.

Introduction

Motion capture systems have been integral to the field of biomechanics for many years. They provide essential insights into human movement for clinical assessments and sport performance evaluations. Traditional marker-based systems have been the gold standard for capturing kinematic data with high accuracy. However, these systems require specialized equipment, involve extensive setup procedures, and may affect natural movement due to the attachment of markers on the body [1,2]. In response to these challenges, markerless motion capture systems have emerged, utilizing advancements in computer vision, machine learning, and artificial intelligence [3]. OpenCap (Stanford University, USA) is an innovative, open-source, web-based motion capture system that offers the research community free access to advanced biomechanical analysis [4]. It enables the estimation of 3D kinematics and dynamics of human movement from videos recorded with at least two smartphones. As markerless systems like OpenCap continue to develop, this review also examines other systems to provide a comprehensive overview of advancements and challenges in the field.

OpenCap: A Markerless Motion Capture System

Overview and Capabilities

OpenCap marks a major advancement in motion capture technology, employing sophisticated algorithms and multiple camera angles to monitor and analyze human movement without the cumbersome physical markers used in traditional systems. It uses pose estimation algorithms to detect body landmarks from videos recorded with two or more smartphones and applies deep learning and biomechanical models to calculate three-dimensional kinematics and dynamics [4]. The system’s ability to predict dynamic measures such as muscle activations, joint loads, and joint moments enables its use in screening for disease risk, evaluating intervention efficacy, and informing rehabilitation decisions [4,5]. OpenCap’s accessibility and ease of use position it as a promising tool for large-scale human movement studies outside of traditional laboratory environments [4]. By embracing an open-source framework, OpenCap invites continuous innovation and customization, making it highly adaptable to the diverse and evolving needs of biomechanical research and clinical practice [4]. Additionally, its reliance on smartphones offers a flexible, low-cost alternative to the extensive, fixed equipment required by more complex setups.
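The core step of recovering 3D landmarks from synchronized 2D keypoint detections can be illustrated with standard computer vision tools. The following is a minimal sketch using OpenCV’s linear triangulation with two calibrated cameras; it is not OpenCap’s actual pipeline, and the camera parameters and keypoint coordinates are illustrative placeholders.

```python
import numpy as np
import cv2

# Illustrative intrinsics for two identical smartphone cameras (focal length and
# principal point in pixels); in practice these come from camera calibration.
K = np.array([[1400.0, 0.0, 640.0],
              [0.0, 1400.0, 360.0],
              [0.0, 0.0, 1.0]])

# Camera 1 at the origin; camera 2 translated 0.5 m along the x-axis.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])

# Pixel coordinates of one body landmark (e.g., the right knee) detected by a
# pose estimator in a synchronized frame from each camera, shape (2, N).
pts_cam1 = np.array([[733.3], [406.7]])
pts_cam2 = np.array([[500.0], [406.7]])

# Linear triangulation returns homogeneous 4xN coordinates; dividing by the
# fourth row recovers metric 3D positions.
X_h = cv2.triangulatePoints(P1, P2, pts_cam1, pts_cam2)
X = (X_h[:3] / X_h[3]).T

print("Estimated 3D landmark position:", X[0])  # approximately [0.2, 0.1, 3.0]
```

In the full OpenCap pipeline, such keypoints are additionally processed with learned models and fit to a musculoskeletal model via inverse kinematics to obtain joint angles rather than raw landmark trajectories [4].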

Validation and Accuracy

Accuracy is a pivotal criterion in evaluating OpenCap, especially in comparison to the well-established precision of traditional marker-based systems. Compared with a traditional marker-based system, OpenCap has demonstrated a rotational mean absolute error (MAE) of 4.5 degrees and a translational MAE of 12.3 mm across various movements, including walking, squatting, sit-to-stand, and drop jumps [4]. Further research extended this validation to assess concurrent accuracy and inter-trial variability in pathological gait patterns, revealing an average root mean square error (RMSE) of 5.8 degrees and greater variation in lower extremity kinematics across walking trials compared to a marker-based motion capture system [6,7]. Additionally, OpenCap has shown an MAE of 3.85 degrees and an RMSE of 4.34 degrees across lower extremity joint angles during common return-to-sport tasks involving hopping and landing techniques [5]. However, traditional systems, which typically achieve RMSEs of less than 2.0 mm for moving markers and 1.0 mm for stationary markers, are recognized for their superior spatial accuracy [8].
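To make these error metrics concrete, the snippet below shows how MAE and RMSE between a markerless joint-angle estimate and a marker-based reference waveform are typically computed. The waveforms are synthetic placeholders, not data from the cited studies.

```python
import numpy as np

# Synthetic knee flexion waveforms (degrees) over one time-normalized gait
# cycle: a marker-based "reference" curve and a noisy markerless estimate.
t = np.linspace(0.0, 1.0, 101)
reference = 30.0 * np.sin(np.pi * t) + 5.0
markerless = reference + np.random.normal(0.0, 4.0, size=t.shape)

error = markerless - reference
mae = np.mean(np.abs(error))         # mean absolute error (degrees)
rmse = np.sqrt(np.mean(error ** 2))  # root mean square error (degrees)

print(f"MAE:  {mae:.2f} deg")
print(f"RMSE: {rmse:.2f} deg")
```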

Error values reported for marker-based systems relative to bone-pin and radiographic measurements put these figures in context [9-14]. Discrepancies between marker-based systems and direct measurements have been reported, with RMS differences for marker positions ranging from 1.06 mm to 8.31 mm depending on the task and plane of motion [13]. Angular errors are smaller in the sagittal plane (2.45 degrees for walking and 2.67 degrees for running) and larger in the transverse plane, reaching up to 4.70 degrees for running [13]. Surface movement errors of up to 10.5 mm in knee joint center locations and leg rotational errors of up to 8 degrees for joint kinematics have also been observed [11]. These limitations affect the validation of markerless systems like OpenCap, which are typically benchmarked against traditional systems.

Challenges in traditional systems include accurately placing markers on anatomical landmarks and variability arising from day-to-day differences and inter-tester inconsistencies [9,15]. In complex movements, slight discrepancies in marker placement significantly affect data accuracy [10]. Markerless systems like OpenCap may reduce these issues by eliminating the need for physical markers, minimizing errors introduced by human variability.

Challenges and Future Directions

While OpenCap eliminates the need for physical markers, offering a less intrusive method of motion capture, it faces challenges such as increased inter-trial variability, which necessitates improved data processing to achieve accuracy comparable to traditional systems [6,7]. This has important implications for post-processing: averaging waveforms across trials is recommended to mitigate the effects of this variability, rather than relying on single trials that may lead to misleading conclusions. Additionally, while calibrationless monocular vision systems have been investigated as simpler alternatives for motion capture, they generally produce larger errors than multi-camera systems like OpenCap, indicating they may not yet be suitable replacements in applications requiring high precision [16].
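Averaging time-normalized waveforms across trials, as recommended above, can be implemented in a few lines. The sketch below assumes each trial is a one-dimensional joint-angle array of arbitrary length and resamples every trial to 101 points (0-100% of the movement cycle) before averaging; it illustrates the general idea rather than a specific published procedure.

```python
import numpy as np

def time_normalize(trial, n_points=101):
    """Resample one joint-angle trial to a fixed number of points (0-100% of cycle)."""
    original = np.linspace(0.0, 1.0, len(trial))
    target = np.linspace(0.0, 1.0, n_points)
    return np.interp(target, original, trial)

def average_waveform(trials, n_points=101):
    """Time-normalize each trial, then average point by point across trials."""
    normalized = np.vstack([time_normalize(t, n_points) for t in trials])
    return normalized.mean(axis=0), normalized.std(axis=0)

# Example: three synthetic trials of different lengths (degrees).
trials = [20.0 * np.sin(np.pi * np.linspace(0.0, 1.0, n)) for n in (95, 103, 110)]
mean_curve, sd_curve = average_waveform(trials)
print(mean_curve.shape, sd_curve.shape)  # (101,) (101,)
```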

OpenCap’s preset video capture configuration, which utilizes a sampling frequency of 60 Hz and a resolution of 1280 x 720, is adequate for many biomechanical analyses, particularly those involving slower movements. However, it may not be as effective for capturing fast movements where higher frame rates would provide better accuracy. Future updates aimed at increasing the frame rate could enhance OpenCap’s performance in more dynamic contexts.

These findings suggest that while OpenCap provides reasonably accurate kinematic data across various movements, its precision is somewhat lower than that of traditional marker-based systems, particularly in dynamic activities. This variability in accuracy should be carefully considered when using OpenCap in settings where precise kinematic measurements are critical. Future research should focus on refining the algorithms used for pose estimation and enhancing data processing techniques specifically within markerless systems.

Hybrid systems that combine markerless and marker-based technologies might offer potential advantages by leveraging the strengths of both approaches, though they could also introduce complexities, such as longer setup times and calibration challenges. For instance, hybrid systems using a Time-of-Flight camera, high-speed vision sensors, and inertial measurement units have been successfully applied to skiing and snowboarding, providing highly accurate real-time motion capture in dynamic outdoor environments [17,18]. Hybrid systems such as these offer advantages in complex, dynamic environments where traditional marker-based systems face limitations, including marker occlusion and environmental interference. Recent research demonstrated the effective use of hybrid coregistration of 2D video and 3D marker data in tasks where placing markers is impractical, such as juggling, highlighting the adaptability of these systems across motion capture applications [19]. However, the increased complexity of setup, calibration, and data synchronization between systems poses challenges that must be managed. For example, while joint kinematics from these hybrid setups are highly accurate, aligning data from the marker-based and markerless sources requires meticulous calibration, which can offset the usability benefits gained from the markerless components.

Advancing markerless technology alone may provide a more straightforward path to reducing errors and improving the reliability of these systems. Specifically, research could focus on developing more robust algorithms for dynamic activities or increasing sampling frequencies to capture rapid movements with greater precision.
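One concrete element of the data synchronization challenge noted above is temporal alignment of the marker-based and markerless recordings. A common approach, sketched here with synthetic signals and an assumed shared sampling rate, is to estimate the lag that maximizes the cross-correlation of a kinematic signal visible to both systems (for example, vertical pelvis position); this is an illustrative method, not the procedure used in the cited studies.

```python
import numpy as np

def estimate_lag(reference, delayed):
    """Estimate how many samples `delayed` lags behind `reference`
    (a positive result means `delayed` starts later)."""
    a = reference - np.mean(reference)
    b = delayed - np.mean(delayed)
    xcorr = np.correlate(b, a, mode="full")
    return int(np.argmax(xcorr)) - (len(a) - 1)

# Synthetic "vertical pelvis position" during a squat, sampled at ~100 Hz;
# the markerless recording starts about 0.12 s (12 samples) later.
t = np.linspace(0.0, 5.0, 500)
marker_based = 1.0 - 0.3 * np.exp(-((t - 2.00) ** 2) / 0.1)
markerless = 1.0 - 0.3 * np.exp(-((t - 2.12) ** 2) / 0.1)
markerless += np.random.normal(0.0, 0.002, t.shape)

print("Estimated lag:", estimate_lag(marker_based, markerless), "samples")  # ~12
```

Spatial registration between the two coordinate systems would additionally require a rigid-body transform estimated from shared landmarks, which is where much of the calibration effort described above is spent.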

The trajectory of OpenCap and similar markerless systems is promising, driven by continuous advancements in AI, computer vision, and sensor technology; such systems are expected to become increasingly accurate, user-friendly, and accessible [20]. The field must now address key challenges, including reducing variability and determining the future role of hybrid systems in motion capture. Integrating real-time feedback and augmented reality could further enhance the utility of markerless capture in sport training and rehabilitation. However, addressing inter-trial variability through improved algorithms and data processing techniques will be crucial for broader adoption in clinical and sport settings [7]. Continued development of deep learning models and the expansion of databases to cover a broader range of populations and activities could have a significant impact on both research and clinical practice. OpenCap’s potential to democratize access to high-quality motion analysis could revolutionize biomechanical assessments [4].

Other Markerless Motion Capture Systems

While OpenCap represents a significant advancement in markerless motion capture, it is not the only system contributing to the development of motion analysis without the need for markers. Other systems, such as Theia3D (Theia Markerless, Canada), DeepLabCut (Mathis Lab, Switzerland), and Captury Live (The Captury, Germany), offer alternatives with different approaches to accuracy, usability, and real-world applications. Each system involves a trade-off between the number of cameras used and the complexity of the calibration process, which directly affects the balance between accuracy and ease of use.

Theia3D offers a highly accurate markerless motion capture system, with root mean square differences (RMSD) between joint centers of less than 2.5 cm for most joints when compared to traditional marker-based systems [20]. It excels in capturing spatiotemporal gait parameters and lower limb kinematics, particularly in the sagittal and frontal planes, with high agreement for joint angles such as flexion and extension [21]. However, challenges remain in measuring transverse plane movements, such as pelvic tilt and internal/external rotations, where RMSD can reach up to 13 degrees [20]. The system is particularly suited for clinical research due to its high accuracy and multi-camera setup, though the complexity of its setup and associated costs may present barriers for some users.

DeepLabCut (DLC) is an open-source, deep-learning-based markerless system designed for animal and human motion tracking [22]. Unlike OpenCap, which uses a multi-camera setup, DLC can perform effectively with a single camera after being trained on labeled data, providing flexibility for small-scale studies [23]. DLC’s application across diverse species and behaviors is made possible by its transfer learning approach, which allows for human-level accuracy with as few as 200 labeled images [22,23]. However, additional manual labeling may be required when the system encounters movements or subjects that deviate significantly from the training data, which may limit its generalizability. While DLC works well in simple or controlled environments, its performance tends to decline in dynamic or complex scenarios, particularly with occlusions or high-speed movements [22,23]. Its flexible camera setup and open-source platform make it accessible to a wide range of users, though the manual effort required for training can be time-consuming, especially for diverse or unstructured datasets.
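For readers unfamiliar with the workflow described above, the outline below sketches the typical sequence of top-level DeepLabCut calls (project creation, frame labeling, training, and video analysis). It is a hedged sketch: the function names follow the commonly documented DeepLabCut API, but exact arguments and defaults vary across releases and should be checked against the installed version; all paths are placeholders.

```python
import deeplabcut

# Create a project from one or more videos (placeholder paths) and get its config file.
config_path = deeplabcut.create_new_project(
    "hopping-study", "lab", ["/data/videos/subject01.mp4"]
)

# Extract a modest number of frames and label them manually; transfer learning
# from pretrained networks is what makes roughly 200 labeled frames sufficient.
deeplabcut.extract_frames(config_path)
deeplabcut.label_frames(config_path)

# Build the training dataset, then train and evaluate the network.
deeplabcut.create_training_dataset(config_path)
deeplabcut.train_network(config_path)
deeplabcut.evaluate_network(config_path)

# Apply the trained network to new videos and export labeled results.
deeplabcut.analyze_videos(config_path, ["/data/videos/subject02.mp4"])
deeplabcut.create_labeled_video(config_path, ["/data/videos/subject02.mp4"])
```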

Captury Live is a markerless motion capture system used in various fields, including biomechanics. It employs multiple cameras to deliver immediate feedback, making it especially useful for capturing real-time dynamic movements. However, its precision in recording joint angles and kinematics is generally lower than that of systems like OpenCap or Theia3D, particularly during complex movements. Studies have shown that while Captury Live reliably estimates variables such as jump length and separation distance ratios, its accuracy decreases for more detailed measurements, such as knee flexion or during dynamic movements like squats and jumps, especially when the knees are fully extended [24,25]. The system’s focus on real-time feedback and ease of use makes it well-suited for dynamic settings where immediate results are prioritized, though its lower accuracy compared to other systems may limit its use in detailed biomechanical analyses [24,25].

Applications in Biomechanics

In sport biomechanics, OpenCap has emerged as a valuable tool, particularly for analyzing athletic performance in environments that closely mimic real-world conditions, free from the constraints of traditional marker-based setups. By eliminating the need for cumbersome markers, OpenCap allows kinematics and kinetics to be captured during unencumbered movement, yielding data that may better reflect natural performance. In clinical settings, OpenCap offers a non-invasive method for assessing and monitoring patients. The system is increasingly being used in rehabilitation to track patient progress, assess movement deficiencies, and tailor interventions [4]. The ability to capture motion data without markers is especially beneficial for patients who may be uncomfortable with, or physically unable to wear, traditional markers [5].

However, OpenCap’s simplified modeling approach presents some limitations. It employs a 1-degree of freedom (DOF) model for the knee and a 2-DOF model for the ankle, which differs from the more complex 3-DOF models typically used in traditional marker-based systems. This simplification also means that a key clinical variable, knee valgus, is not captured by OpenCap’s default settings [4,5]. These limitations could restrict the system’s application in scenarios where detailed joint motion analysis or specific clinical assessments, such as knee valgus, are critical [26-28].
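Whether a given musculoskeletal model exposes the coordinates a study needs (for example, a frontal-plane knee coordinate for valgus) can be checked directly from the model file. The sketch below uses the OpenSim Python bindings to list a model’s coordinates; the model path is a placeholder, and the calls reflect the commonly documented OpenSim scripting interface rather than any OpenCap-specific tooling.

```python
import opensim as osim

# Load a scaled musculoskeletal model (placeholder path) and list its
# coordinates, i.e., the degrees of freedom available to inverse kinematics.
model = osim.Model("scaled_model.osim")
coordinates = model.getCoordinateSet()

for i in range(coordinates.getSize()):
    print(coordinates.get(i).getName())

# A knee modeled with a single degree of freedom typically exposes only a
# flexion coordinate (e.g., 'knee_angle_r') and no abduction/adduction
# coordinate, so frontal-plane valgus cannot be reported directly.
```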

One of the most promising aspects of OpenCap is its potential for everyday use. Individuals can perform biomechanical assessments in their own homes using just a few cameras or even smartphones [4]. This accessibility opens new possibilities for fitness tracking, ergonomic assessments, and home rehabilitation, making advanced biomechanical analysis more widely available [3].

For large-scale studies and in-field applications, managing the increased inter-trial variability is crucial. Automated techniques for detecting and eliminating outliers before averaging data can improve the robustness of analyses, making OpenCap more viable for widespread use [7]. Additionally, OpenCap’s reliance on cloud computing for data processing enhances its accessibility, enabling users to collect, analyze, and visualize movement data without the need for specialized hardware or software [4].
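A simple version of the automated outlier screening mentioned above is to compare each time-normalized trial with the median waveform and discard trials whose deviation is unusually large. The sketch below, using synthetic trials and an arbitrary z-score threshold, illustrates one such rule; it is an example approach rather than the specific method used in the cited work.

```python
import numpy as np

def flag_trials_to_keep(trials, z_threshold=2.5):
    """Return a boolean mask of trials whose RMS deviation from the median
    waveform is not unusually large. `trials` has shape (n_trials, n_points)."""
    median_curve = np.median(trials, axis=0)
    rms_dev = np.sqrt(np.mean((trials - median_curve) ** 2, axis=1))
    z = (rms_dev - np.mean(rms_dev)) / np.std(rms_dev)
    return z < z_threshold

# Synthetic example: nine consistent trials plus one corrupted trial.
rng = np.random.default_rng(0)
base = 30.0 * np.sin(np.pi * np.linspace(0.0, 1.0, 101))
trials = np.vstack([base + rng.normal(0.0, 2.0, 101) for _ in range(9)]
                   + [base + rng.normal(0.0, 15.0, 101)])

keep = flag_trials_to_keep(trials)
clean_mean = trials[keep].mean(axis=0)
print("Trials kept:", int(keep.sum()), "of", len(trials))  # expected: 9 of 10
```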

Conclusions

OpenCap represents a significant advancement in biomechanics, offering a more natural, accessible, and scalable alternative to traditional marker-based systems. While the increased inter-trial variability observed in OpenCap presents challenges that must be addressed through careful data processing and algorithmic improvements, the system holds great promise for revolutionizing the study and understanding of human movement. The omission of critical clinical variables, such as knee valgus, from OpenCap’s default settings and its simplified modeling of joint motion are notable limitations that could restrict its application in certain clinical and research settings. However, as OpenCap and similar technologies continue to evolve, their potential to democratize access to high-quality motion analysis is immense. These advancements could transform biomechanics research and clinical practice, paving the way for more personalized, accessible, and effective assessments of human movement. The future of motion capture technology is promising, and further innovations in this space are likely to significantly enhance our ability to analyze and understand complex human movements.

Notes

The authors declare no conflict of interest.

References

1. Baker R. Gait analysis methods in rehabilitation. J Neuroeng Rehabil 2006;3:4.
2. Richards J. Biomechanics in Clinic and Research: An Interactive Teaching and Learning Course. Churchill Livingstone/Elsevier; 2008.
3. Colyer SL, Evans M, Cosker DP, Salo AIT. A review of the evolution of vision-based motion analysis and the integration of advanced computer vision methods towards developing a markerless system. Sports Med Open 2018;4(1):24.
4. Uhlrich SD, Falisse A, Kidziński Ł, et al. OpenCap: Human movement dynamics from smartphone videos. PLoS Comput Biol 2023;19(10):e1011462.
5. Turner JA, Chaaban CR, Padua DA. Validation of OpenCap: A low-cost markerless motion capture system for lower-extremity kinematics during return-to-sport tasks. J Biomech 2024;171:112200.
6. Horsak B, Eichmann A, Lauer K, et al. Concurrent validity of smartphone-based markerless motion capturing to quantify lower-limb joint kinematics in healthy and pathological gait. J Biomech 2023;159:111801.
7. Horsak B, Prock K, Krondorfer P, Siragy T, Simonlehner M, Dumphart B. Inter-trial variability is higher in 3D markerless compared to marker-based motion capture: Implications for data post-processing and analysis. J Biomech 2024;166:112049.
8. Richards JG. The measurement of human motion: A comparison of commercially available systems. Hum Mov Sci 1999;18(5):589–602.
9. Cappozzo A, Catani F, Leardini A, Benedetti MG, Croce UD. Position and orientation in space of bones during movement: experimental artefacts. Clin Biomech (Bristol, Avon) 1996;11(2):90–100.
10. Reinschmidt C, Van Den Bogert A, Lundberg A, et al. Tibiofemoral and tibiocalcaneal motion during walking: external vs. skeletal markers. Gait Posture 1997;6(2):98–109.
11. Holden JP, Orsini JA, Siegel KL, Kepple TM, Gerber LH, Stanhope SJ. Surface movement errors in shank kinematics and knee kinetics during gait. Gait Posture 1997;5(3):217–27.
12. Hume DR, Kefala V, Harris MD, Shelburne KB. Comparison of marker-based and stereo radiography knee kinematics in activities of daily living. Ann Biomed Eng 2018;46(11):1806–15.
13. Kessler SE, Rainbow MJ, Lichtwark GA, et al. A direct comparison of biplanar videoradiography and optical motion capture for foot and ankle kinematics. Front Bioeng Biotechnol 2019;7:199.
14. Li K, Zheng L, Tashman S, Zhang X. The inaccuracy of surface-measured model-derived tibiofemoral kinematics. J Biomech 2012;45(15):2719–23.
15. Chiari L, Della Croce U, Leardini A, Cappozzo A. Human movement analysis using stereophotogrammetry. Part 2: instrumental errors. Gait Posture 2005;21(2):197–211.
16. Ueno R. Calibrationless monocular vision musculoskeletal simulation during gait. Heliyon 2024;10(11):e32078.
17. Yu J, Ma X, Qi S, et al. Key transition technology of ski jumping based on inertial motion unit, kinematics and dynamics. Biomed Eng Online 2023;22(1):21.
18. Li Z, Wang J, Zhang T, et al. Real-time capture of snowboarder’s skiing motion using a 3D vision sensor. Wirel Commun Mob Comput 2021;2021(1):8517771.
19. Kim H, Miyakoshi M, Iversen JR. Approaches for hybrid coregistration of marker-based and markerless coordinates describing complex body/object interactions. Sensors (Basel) 2023;23(14).
20. Kanko RM, Laende EK, Davis EM, Selbie WS, Deluzio KJ. Concurrent assessment of gait kinematics using marker-based and markerless motion capture. J Biomech 2021;127:110665.
21. Kanko RM, Laende EK, Strutzenberger G, et al. Assessment of spatiotemporal gait parameters using a deep learning algorithm-based markerless motion capture system. J Biomech 2021;122:110414.
22. Mathis A, Mamidanna P, Cury KM, et al. DeepLabCut: Markerless pose estimation of user-defined body parts with deep learning. Nat Neurosci 2018;21(9):1281–9.
23. Nath T, Mathis A, Chen AC, Patel A, Bethge M, Mathis MW. Using DeepLabCut for 3D markerless pose estimation across species and behaviors. Nat Protoc 2019;14(7):2152–76.
24. Harsted S, Holsgaard-Larsen A, Hestbæk L, Boyle E, Lauridsen HH. Concurrent validity of lower extremity kinematics and jump characteristics captured in pre-school children by a markerless 3D motion capture system. Chiropr Man Therap 2019;27:39.
25. Harsted S, Holsgaard-Larsen A, Hestbæk L, Andreasen DL, Lauridsen HH. Test-retest reliability and agreement of lower-extremity kinematics captured in squatting and jumping preschool children using markerless motion capture technology. Front Digit Health 2022;4:1027647.
26. Hewett TE, Myer GD, Ford KR, et al. Biomechanical measures of neuromuscular control and valgus loading of the knee predict anterior cruciate ligament injury risk in female athletes: a prospective study. Am J Sports Med 2005;33(4):492–501.
27. Larwa J, Stoy C, Chafetz RS, Boniello M, Franklin C. Stiff landings, core stability, and dynamic knee valgus: A systematic review on documented anterior cruciate ligament ruptures in male and female athletes. Int J Environ Res Public Health 2021;18(7):3826.
28. Shin CS, Chaudhari AM, Andriacchi TP. Valgus plus internal rotation moments increase anterior cruciate ligament strain more than either alone. Med Sci Sports Exerc 2011;43(8):1484–91.
