Finger tracking

In the field of gesture recognition and image processing, finger tracking is a high-resolution technique, developed in 1969, that is employed to know the consecutive position of the fingers of the user and hence represent objects in 3D. In addition to that, the finger tracking technique is used as a tool of the computer, acting as an external device similar to a keyboard and a mouse.

[Figure: finger tracking of two pianists' fingers playing the same piece (slow motion, no sound).]

Introduction

The finger tracking system is focused on user-data interaction, where the user interacts with virtual data by handling, through the fingers, the volumetric representation of the 3D object that we want to work with. The system was born from the human-computer interaction problem: the objective is to allow communication between user and computer and to make the use of gestures and hand movements more intuitive, and finger tracking systems have been created for this purpose. These systems track in real time the 3D and 2D position and orientation of the fingers (or of each marker attached to them) and use the intuitive hand movements and gestures to interact.

Types of tracking

There are many options for the implementation of finger tracking, principally those used with or without an interface.

Tracking with interface

This approach mostly uses inertial and optical motion capture systems.

Inertial motion capture gloves

Inertial motion capture systems are able to capture finger motion by reading the rotation of each finger segment in 3D space. Applying these rotations to a kinematic chain, the whole human hand can be tracked in real time, without occlusion and wirelessly.

Hand inertial motion capture systems, such as Synertial mocap gloves, use tiny IMU-based sensors located on each finger segment. Precise capture requires at least 16 sensors to be used. There are also mocap glove models with fewer sensors (13 or 7), for which the remaining finger segments are interpolated (proximal segments) or extrapolated (distal segments). The sensors are typically inserted into a textile glove, which makes them more comfortable to wear.

Inertial sensors can capture movement in all three directions, which means that finger and thumb flexion, extension and abduction can be detected.

Hand skeleton

Since inertial sensors track only rotations, the rotations have to be applied to some hand skeleton in order to get proper output. To get precise output (for example, to be able to touch the fingertips together), the hand skeleton has to be properly scaled to match the real hand. For this purpose, manual measurement of the hand or automatic measurement extraction can be used.
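
As an illustration of the rotations-plus-skeleton idea, here is a minimal sketch (not any vendor's SDK; the joint angles and segment lengths below are hypothetical) that applies per-segment rotations along one finger's kinematic chain of a scaled skeleton to obtain the fingertip position:

```python
import numpy as np

def rot_z(angle_rad: float) -> np.ndarray:
    """Rotation about the z-axis (flexion/extension in this toy model)."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def fingertip_position(joint_angles_rad, segment_lengths_m, palm_origin=(0.0, 0.0, 0.0)):
    """Walk the kinematic chain: each joint rotates, each bone segment translates."""
    orientation = np.eye(3)
    position = np.asarray(palm_origin, dtype=float)
    for angle, length in zip(joint_angles_rad, segment_lengths_m):
        orientation = orientation @ rot_z(angle)                          # accumulate joint rotation
        position = position + orientation @ np.array([length, 0.0, 0.0])  # advance along the bone
    return position

# Hypothetical index finger: three joints, segment lengths scaled to the measured hand.
angles = np.radians([30.0, 45.0, 20.0])   # as they might be read from the glove's sensors
lengths = [0.045, 0.025, 0.018]           # metres, from manual measurement of the hand
print(fingertip_position(angles, lengths))
```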

Hand position tracking

On top of finger tracking, many users require positional tracking of the whole hand in space. Multiple methods can be used for this purpose:

- Capturing the whole body with an inertial mocap system (the hand skeleton is attached at the end of the body skeleton's kinematic chain), so that the position of the palm is determined from the body.
- Capturing the position of the palm (forearm) with an optical mocap system.
- Capturing the position of the palm (forearm) with another position tracking method, as widely used in VR headsets (for example, HTC Vive Lighthouse).

Disadvantages of inertial motion capture systems

Inertial sensors have two main disadvantages connected with finger tracking:

- Problems capturing the absolute position of the hand in space.
- Magnetic interference: metal materials tend to interfere with the sensors, and the problem can be noticeable because hands are often in contact with objects made of metal. Current generations of motion capture gloves are able to withstand magnetic interference; the degree to which they are immune to it depends on the manufacturer, the price range and the number of sensors used in the glove. Notably, stretch sensors are silicone-based capacitors that are completely unaffected by magnetic interference.

Optical motion capture systems

Some of the optical systems, like Vicon or ART, are able to capture hand motion through markers: on each hand there is a marker for each "operative" finger. Three high-resolution cameras are responsible for capturing each marker and measuring its position, which is only possible while a camera can actually see the marker. The visual markers, usually known as rings or bracelets, are used to recognize user gestures in 3D; in addition, as the classification indicates, these rings act as an interface in 2D.

Markers

Tracking of the location of the markers and their patterns in 3D is performed; the system identifies them and labels each marker according to the position of the user's fingers. The 3D coordinates of these labelled markers are produced in real time and can be used by other applications.

Marker functionality

Markers operate through interaction points, which are usually already set, so the relevant regions are known in advance. Because of that, it is not necessary to follow each marker all the time; multiple pointers can be treated in the same way as a single operating pointer. To detect such pointers during an interaction, ultrasound or infrared sensors are enabled. Treating many pointers as one also helps under difficult conditions such as bad illumination, motion blur, malformation of a marker or occlusion: the system can keep following the object even if some markers are not visible. Because the spatial relationships of all the markers are known, the positions of the markers that are not visible can be computed from the markers that are. There are several methods for marker detection, such as border-marker and estimated-marker methods.
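
A minimal sketch of that last idea, under the assumption that the markers sit rigidly on the same hand segment (the marker names and offsets below are invented for illustration): fit the rigid transform that maps the visible markers' known local offsets onto their measured positions, then apply it to the occluded marker's offset.

```python
import numpy as np

# Known marker offsets in the hand segment's local frame (hypothetical rig, metres).
local_offsets = {
    "thumb": np.array([0.03, 0.00, 0.0]),
    "index": np.array([0.00, 0.04, 0.0]),
    "pinky": np.array([-0.03, 0.00, 0.0]),
    "ring":  np.array([-0.02, 0.03, 0.0]),   # currently occluded
}

def fit_rigid_transform(local_pts, world_pts):
    """Kabsch-style fit of rotation R and translation t so that world ~ R @ local + t."""
    lc, wc = local_pts.mean(axis=0), world_pts.mean(axis=0)
    H = (local_pts - lc).T @ (world_pts - wc)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against an improper (reflected) solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    return R, wc - R @ lc

# Measured world positions of the markers the cameras can still see.
visible = {"thumb": np.array([1.03, 2.00, 0.50]),
           "index": np.array([1.00, 2.04, 0.50]),
           "pinky": np.array([0.97, 2.00, 0.50])}

R, t = fit_rigid_transform(np.array([local_offsets[k] for k in visible]),
                           np.array([visible[k] for k in visible]))
print(R @ local_offsets["ring"] + t)   # estimated position of the occluded "ring" marker
```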

Occlusion as an interaction method

Visual occlusion is a very intuitive way to provide a more realistic viewpoint on virtual information in three dimensions. The interfaces also provide more natural 3D interaction techniques, for example:

- The HOMER technique combines ray selection with direct manipulation: an object is selected and then its position and orientation are handled as if it were attached directly to the hand.
- The Conner technique presents a set of 3D widgets that permit an indirect interaction with the virtual objects through a virtual widget that acts as an intermediary.

Fusing data with optical motion capture systems

Because of marker occlusion during capture, tracking fingers is the most challenging part for optical motion capture systems (such as Vicon, OptiTrack or ART). Users of optical mocap systems report that most of their post-processing work is due to finger capture. As inertial mocap systems (if properly calibrated) mostly need no post-processing, the typical approach for high-end mocap users is to fuse data from inertial mocap systems (fingers) with optical mocap systems (body and position in space).

The process of fusing mocap data is based on matching the time codes of each frame from the inertial and the optical data sources. This way, any third-party software (for example MotionBuilder or Blender) can apply motions from the two sources independently of the mocap method used.
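
A minimal sketch of the time-code matching step (the frame format, rates and tolerance below are illustrative, not any specific vendor's convention): index one stream by time code, then pair each frame of the other stream with the nearest time code within a tolerance.

```python
from bisect import bisect_left

def fuse_by_timecode(optical_frames, inertial_frames, tolerance_s=0.004):
    """Pair optical body frames with the inertial finger frames whose time codes are closest.

    Each frame is a dict with a 'tc' key (time code in seconds) and a 'pose' payload.
    """
    inertial_tcs = [f["tc"] for f in inertial_frames]   # assumed sorted by time code
    fused = []
    for opt in optical_frames:
        i = bisect_left(inertial_tcs, opt["tc"])
        candidates = [j for j in (i - 1, i) if 0 <= j < len(inertial_tcs)]
        best = min(candidates, key=lambda j: abs(inertial_tcs[j] - opt["tc"]))
        if abs(inertial_tcs[best] - opt["tc"]) <= tolerance_s:
            fused.append({"tc": opt["tc"],
                          "body": opt["pose"],
                          "fingers": inertial_frames[best]["pose"]})
    return fused

# Toy streams: an optical system at 120 fps and an inertial glove at 240 fps.
optical = [{"tc": n / 120.0, "pose": f"body_{n}"} for n in range(5)]
inertial = [{"tc": n / 240.0, "pose": f"hand_{n}"} for n in range(10)]
print(fuse_by_timecode(optical, inertial))
```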

Stretch sensor finger tracking

Stretch-sensor-based motion capture systems use flexible parallel-plate capacitors to detect differences in capacitance when the sensors stretch, bend, shear or are subjected to pressure. Stretch sensors are commonly silicone-based, which means they are unaffected by magnetic interference, occlusion or positional drift (common in inertial systems). The robust and flexible qualities of these sensors lead to high-fidelity finger tracking, and they feature in mocap gloves produced by StretchSense.
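
To make the capacitance-to-motion step concrete, here is a minimal sketch of mapping a capacitance reading to a joint angle; the two-point calibration values are invented example numbers, not figures from any real glove, and real systems use more careful per-sensor calibration.

```python
def capacitance_to_angle(c_pf: float,
                         c_flat_pf: float = 42.0,     # reading with the finger straight (assumed)
                         c_flexed_pf: float = 55.0,   # reading at full flexion (assumed)
                         full_flexion_deg: float = 90.0) -> float:
    """Two-point linear calibration: stretching the parallel-plate sensor changes its
    geometry and hence its capacitance, which is mapped onto a flexion angle."""
    fraction = (c_pf - c_flat_pf) / (c_flexed_pf - c_flat_pf)
    return max(0.0, min(1.0, fraction)) * full_flexion_deg

print(capacitance_to_angle(48.5))   # about 45 degrees with this toy calibration
```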

Tracking without interface

In terms of visual perception, the legs and hands can be modeled as articulated mechanisms: systems of rigid bodies connected to each other by articulations with one or more degrees of freedom. This model can be applied on a reduced scale to describe hand motion and on a wide scale to describe complete body motion. A certain finger motion, for example, can be recognized from its usual angles, and it does not depend on the position of the hand in relation to the camera.

Many tracking systems are based on a model focused on a sequence-estimation problem: given a sequence of images and a model of change, the 3D configuration is estimated for each frame. All the possible hand configurations are represented by vectors in a state space which encodes the position of the hand and the angles of the finger joints. Each hand configuration generates a set of image features through the detection of the occlusion borders of the finger joints, and the estimate for each image is calculated by finding the state vector that best fits the measured characteristics. The finger joints add 21 states on top of the rigid-body movement of the palm, which increases the computational cost of the estimation. In this technique each finger link is labelled and modeled as a cylinder; axes are placed at each joint, and the bisector of these axes gives the projection of the joint, so 3 DOF are used because there are only 3 degrees of movement.
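
A minimal sketch of this state-space formulation (the dimensions follow the text, but the feature and cost functions are placeholders rather than any specific published tracker): the state vector concatenates the palm's 6-DOF pose with the 21 finger joint-angle states, and estimation means searching that space for the state whose predicted image characteristics best match the measured ones.

```python
import numpy as np

PALM_DOF = 6        # palm position (3) plus orientation (3)
FINGER_DOF = 21     # the additional joint-angle states mentioned above
STATE_DIM = PALM_DOF + FINGER_DOF

def predict_features(state: np.ndarray) -> np.ndarray:
    """Placeholder for the model step: map a hand state to image characteristics
    (a real tracker would project the cylinder-modelled finger links and their
    occlusion borders into the image)."""
    return np.tanh(state)   # stand-in so the sketch runs end to end

def cost(state: np.ndarray, measured: np.ndarray) -> float:
    """How badly a candidate state explains the measured image characteristics."""
    return float(np.sum((predict_features(state) - measured) ** 2))

def estimate(measured: np.ndarray, n_candidates: int = 2000, seed: int = 0) -> np.ndarray:
    """Crude search over the 27-dimensional state space: keep the best-fitting candidate."""
    rng = np.random.default_rng(seed)
    candidates = rng.uniform(-1.0, 1.0, size=(n_candidates, STATE_DIM))
    return min(candidates, key=lambda s: cost(s, measured))

measured = np.tanh(np.linspace(-0.5, 0.5, STATE_DIM))   # synthetic "measured characteristics"
print(cost(estimate(measured), measured))
```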

The procedure is otherwise the same as in the previous typology; there is a wide variety of approaches to the subject, so the exact steps and processing techniques differ depending on the purpose and needs of whoever uses them. In a very general way, however, most systems carry out the following steps:

- Background subtraction: all captured images are convolved with a 5x5 Gaussian filter and then downscaled to reduce noisy pixel data.
- Segmentation: a binary mask is applied so that the pixels that belong to the hand are represented in white and the foreground skin image in black.
- Region extraction: left- and right-hand detection based on a comparison between them.
- Characteristic extraction: locating the fingertips and detecting whether each candidate point is a peak or a valley. To classify a point, the candidate points are transformed into 3D vectors (usually called pseudo-vectors) lying in the xy-plane and their cross product is computed: if the sign of the z component of the cross product is positive, the point is considered a peak; if it is negative, it is a valley (see the sketch after this list).
- Point and pinch gesture recognition: taking into account the points of reference that are visible (the fingertips), a certain gesture is associated.
- Pose estimation: a procedure that identifies the position of the hands through algorithms that compute the distances between positions.
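
A minimal sketch of the peak/valley test from the characteristic extraction step (the contour points are made up; a real system would take them from the segmented hand contour): for each candidate, form the two in-plane pseudo-vectors to its neighbours along the contour and look at the sign of the z component of their cross product.

```python
import numpy as np

def classify_contour_point(prev_pt, pt, next_pt) -> str:
    """Classify a fingertip candidate as a peak or a valley from the z sign of the
    cross product of the two pseudo-vectors lying in the xy-plane."""
    v1 = np.array([pt[0] - prev_pt[0], pt[1] - prev_pt[1], 0.0])
    v2 = np.array([next_pt[0] - pt[0], next_pt[1] - pt[1], 0.0])
    z = np.cross(v1, v2)[2]
    return "peak" if z > 0 else "valley"

# Two mirror-image candidates; with this orientation convention they get opposite labels.
print(classify_contour_point((10, 10), (20, 30), (30, 10)))   # prints 'valley'
print(classify_contour_point((10, 10), (20, -5), (30, 10)))   # prints 'peak'
```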
716: 239:

Articulated hand tracking

Articulated hand tracking is simpler and less expensive than many methods because it only needs one camera. The simplicity results in less precision, but it provides a basis for new interactions in modeling, in the control of animation and in added realism. It uses a glove composed of a set of colors which are assigned according to the position of the fingers. This color test is limited to the vision system of the computer: based on the capture function and the positions of the colors, the position of the hand is known.

Other tracking techniques

It is also possible to perform active tracking of the fingers. The Smart Laser Scanner is a marker-less finger tracking system using a modified laser scanner/projector, developed at the University of Tokyo in 2003-2004. It is capable of acquiring three-dimensional coordinates in real time without the need for any image processing at all: essentially, it is a rangefinder scanner that, instead of continuously scanning over the full field of view, restricts its scanning area to a very narrow window precisely the size of the target. Gesture recognition has been demonstrated with this system. The sampling rate can be very high (500 Hz), enabling smooth trajectories to be acquired without the need for filtering (such as a Kalman filter).

Application

Finger tracking systems are used, first of all, to represent virtual reality. Their application has, however, moved to the professional level of 3D modeling, toward which companies and projects in this field have turned directly; such systems have rarely been used in consumer applications because of their high price and complexity. In any case, the main objective is to facilitate the task of executing commands to the computer via natural language or gesture interaction.

The objective is centered on the following idea: computers should be easier to use if it is possible to operate them through natural language or gesture interaction. The main application of this technique is in 3D design and animation, where software such as Maya and 3D Studio Max employs these kinds of tools. The reason is to allow more accurate and simpler control of the instructions that we want to execute. This technology offers many possibilities, of which sculpting, building and modeling in 3D in real time through the use of a computer is the most important.

See also

- 3D data acquisition and object reconstruction
- 3D pose estimation
- 3D reconstruction from multiple images
- Articulated body pose estimation (techniques to recover and analyse body poses, especially to do with capturing human likenesses)
- 4D reconstruction

References

- Goebl, W.; Palmer, C. (2013). Balasubramaniam, Ramesh (ed.). "Temporal Control and Hand Movement Efficiency in Skilled Music Performance". PLOS ONE. 8 (1): e50901. Bibcode:2013PLoSO...850901G. doi:10.1371/journal.pone.0050901. PMC 3536780. PMID 23300946.
- "The world's leading motion capture glove". StretchSense. Retrieved 2020-11-24.
- Anderson, D., Yedidia, J., Frankel, J., Marks, J., Agarwala, A., Beardsley, P., Hodgins, J., Leigh, D., Ryall, K., & Sullivan, E. (2000). Tangible interaction + graphical interpretation: a new approach to 3D modeling. SIGGRAPH. p. 393-402.
- Angelidis, A., Cani, M.-P., Wyvill, G., & King, S. (2004). Swirling-Sweepers: Constant-volume modeling. Pacific Graphics. p. 10-15.
- Grossman, T., Wigdor, D., & Balakrishnan, R. (2004). Multi finger gestural interaction with 3D volumetric displays. UIST. p. 61-70.
- Freeman, W. & Weissman, C. (1995). Television control by hand gestures. International Workshop on Automatic Face and Gesture Recognition. p. 179-183.
- Ringel, M., Berg, H., Jin, Y., & Winograd, T. (2001). Barehands: implement-free interaction with a wall-mounted display. CHI Extended Abstracts. p. 367-368.
- Cao, X. & Balakrishnan, R. (2003). VisionWand: interaction techniques for large displays using a passive wand tracked in 3D. UIST. p. 173-182.
- Cassinelli, A., Perrin, S. & Ishikawa, M. (2005). Smart Laser-Scanner for 3D Human-Machine Interface. ACM SIGCHI 2005 (CHI '05) International Conference on Human Factors in Computing Systems, Portland, OR, USA, April 2-7, 2005, pp. 1138-1139.

External links

- http://www.synertial.com/
- http://www.vicon.com/
- https://stretchsense.com
- http://www.dgp.toronto.edu/~ravin/videos/graphite2006_proxy.mov
- http://www.dgp.toronto.edu/
- http://www.k2.t.u-tokyo.ac.jp/perception/SmartLaserTracking/
- https://web.archive.org/web/20091211043000/http://actuality-medical.com/Home.html