
2D to 3D conversion


2D to 3D conversion (also called 2D to stereo 3D conversion and stereo conversion) is the process of transforming 2D ("flat") film to 3D form, which in almost all cases is stereo, so it is the process of creating imagery for each eye from one 2D image. It is applied to movies, television shows, social media and printed images.

Allocation of the "depth budget" – defining the range of permitted disparity or depth, which depth value corresponds to the screen position (the so-called "convergence point"), and the permitted distance ranges for out-of-the-screen effects and behind-the-screen background objects. If an object in the stereo pair occupies exactly the same spot for both eyes, it appears on the screen surface and is in zero parallax. Objects in front of the screen are said to be in negative parallax, and background imagery behind the screen is in positive parallax; the left- and right-eye images carry the corresponding negative or positive offsets in object positions.

A separate auxiliary picture known as the "depth map" is created for each frame, or for a series of homogeneous frames, to indicate the depths of objects present in the scene. The depth map is a grayscale image with the same dimensions as the original 2D image, with shades of gray indicating the depth of every part of the frame. While depth mapping can produce a fairly potent illusion of 3D objects in the video, it inherently does not support semi-transparent objects or areas, nor does it represent occluded surfaces; to emphasize this limitation, depth-based 3D representations are often explicitly referred to as 2.5D.

Filling of uncovered areas – the left or right view images show the scene from a different angle, so parts of objects, or entire objects, covered by the foreground in the original 2D image become visible in a stereo pair. Sometimes the background surfaces are known or can be estimated, and they should be used for filling the uncovered areas. Otherwise the unknown areas must be filled in by an artist or inpainted, since the exact reconstruction is not possible.
Some methods propose to use the original image as one eye's image and to generate only the other eye's image, to minimize the conversion cost. During stereo generation, pixels of the original image are shifted to the left or to the right depending on the depth map, the maximum selected parallax, and the screen surface position.
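A minimal sketch of this pixel-shifting step (depth-image-based rendering) is shown below, assuming an 8-bit depth map where 255 is nearest and 0 is farthest. The function name, parameter names, sign convention and the crude hole filling are illustrative assumptions, not a production algorithm.

import numpy as np

def render_view(image, depth, max_parallax_px=20, screen_depth=128, right_eye=True):
    """Shift pixels horizontally according to depth to synthesize one eye's view."""
    h, w = depth.shape
    out = np.zeros_like(image)
    filled = np.zeros((h, w), dtype=bool)
    # Pixels at screen_depth get zero parallax; nearer pixels come out of the screen
    # (negative parallax), farther pixels go behind it (positive parallax).
    parallax = ((depth.astype(np.int32) - screen_depth) * max_parallax_px) // 255
    if right_eye:
        parallax = -parallax
    for y in range(h):
        for x in range(w):
            nx = x + parallax[y, x]
            if 0 <= nx < w:
                # Overlaps are resolved naively (last write wins); a real renderer
                # would order the writes by depth.
                out[y, nx] = image[y, x]
                filled[y, nx] = True
    # Crude hole filling for uncovered areas: copy the nearest filled pixel on the left.
    for y in range(h):
        for x in range(1, w):
            if not filled[y, x]:
                out[y, x] = out[y, x - 1]
    return out

The unfilled holes produced by the shift are exactly the uncovered areas that, in a quality conversion, are reconstructed or painted by an artist rather than naively copied.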
Approaches of this type are also called "depth from defocus" and "depth from blur". In "depth from defocus" (DFD) approaches, the depth information is estimated from the amount of blur of the considered object, whereas "depth from focus" (DFF) approaches compare the sharpness of an object over a range of images taken at different focus distances in order to estimate its distance to the camera.
A development of depth mapping, multi-layering works around its limitations by introducing several layers of grayscale depth masks to implement limited semi-transparency. Similar to the simple technique, multi-layering involves applying a depth map to more than one "slice" of the flat image, resulting in a much better approximation of depth and protrusion.
By their very nature, stereo cameras have restrictions on how far the camera can be from the filmed subject while still providing acceptable stereo separation. For example, the simplest way to film a scene set on the side of a building might be to use a camera rig from across the street on a neighboring building, using a zoom lens.
Depth map creation. Each isolated surface should be assigned a depth map, and the separate depth maps should be composed into a scene depth map. This is an iterative process requiring adjustment of objects, shapes and depth, and visualization of intermediate results in stereo.
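The composition of per-surface depth maps into a scene depth map can be sketched as below; the (mask, depth layer) input format and the "nearest value wins" overlap rule are assumptions for illustration, not a prescribed pipeline.

import numpy as np

def compose_scene_depth(surfaces, height, width):
    """surfaces: iterable of (mask, depth_layer) pairs, both (H, W) uint8 arrays, 255 = nearest."""
    scene = np.zeros((height, width), dtype=np.uint8)   # start with "far" everywhere
    for mask, depth_layer in surfaces:
        region = mask > 0
        # Where isolated surfaces overlap, keep the nearer (larger) depth value.
        scene[region] = np.maximum(scene[region], depth_layer[region])
    return scene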
Stereo generation based on 2D-plus-depth, together with any supplemental information such as clean plates, restored background, transparency maps, etc. When the process is complete, a left and a right image will have been created. Usually the original 2D image is treated as the center image, so that two stereo views are generated.
Computer-animated 2D films made with 3D models can be re-rendered in stereoscopic 3D by adding a second virtual camera if the original data is still available. This is technically not a conversion; therefore, such re-rendered films have the same quality as films originally produced in stereoscopic 3D. Examples of this technique include the re-release of Toy Story and Toy Story 2.
2D-to-3D conversion adds the binocular disparity depth cue to digital images perceived by the brain, thus, if done properly, greatly improving the immersive effect while viewing stereo video in comparison to 2D video. However, in order to be successful, the conversion should be done with sufficient accuracy and correctness: the quality of the original 2D images should not deteriorate, and the introduced disparity cue should not contradict the other cues used by the brain for depth perception.
The HV3D quality metric has been designed with human visual 3D perception in mind. It takes into account the quality of the individual right and left views, the quality of the cyclopean view (the fusion of the right and left views – what the viewer perceives), as well as the quality of the depth information.
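The following is only a schematic illustration of combining the components that the HV3D description mentions (left view, right view, cyclopean view and depth quality); the weights, the simple averaging used as a stand-in for binocular fusion, and the PSNR-based quality terms are assumptions, not the published HV3D formula.

import numpy as np

def psnr(a, b):
    """Peak signal-to-noise ratio, used here as a generic per-component quality term."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return 99.0 if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

def hv3d_like_score(ref_l, ref_r, test_l, test_r, ref_depth, test_depth,
                    weights=(0.25, 0.25, 0.3, 0.2)):
    # Crude stand-in for binocular fusion: the average of the two views.
    cyc_ref = (ref_l.astype(np.float64) + ref_r) / 2
    cyc_test = (test_l.astype(np.float64) + test_r) / 2
    terms = (psnr(ref_l, test_l), psnr(ref_r, test_r),
             psnr(cyc_ref, cyc_test), psnr(ref_depth, test_depth))
    return sum(w * t for w, t in zip(weights, terms))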
3D reconstruction and re-projection may also be used for stereo conversion. It involves creating a 3D model of the scene, extracting the original image surfaces as textures for the 3D objects and, finally, rendering the 3D scene from two virtual cameras to acquire the stereo video. The approach works well enough for scenes with static rigid objects, such as urban shots with buildings and interior shots, but has problems with non-rigid bodies and soft fuzzy edges.
Use of a "rubber sheet" technique, defined as warping the pixels surrounding the occlusion regions to avoid explicit occlusion filling. In such cases the edges of the displacement map are blurred and the transition between foreground and background regions is smoothed.
PQM mimics the HVS (human visual system), as its results align very closely with the Mean Opinion Score (MOS) obtained from subjective tests. The PQM quantifies the luminance and contrast distortion using an approximation (variances) weighted by the mean of each pixel block to obtain the total distortion in the image.
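A rough sketch in the spirit of this description is given below: block-wise luminance and contrast (variance) differences are weighted by the block means, and the aggregated distortion is subtracted from 1. The block size, the exact weighting and the normalization are assumptions for illustration, not the published metric.

import numpy as np

def pqm_like_score(ref, test, block=16):
    """Return a score where higher means less distortion; 1.0 means no measured distortion."""
    ref = ref.astype(np.float64)
    test = test.astype(np.float64)
    h, w = ref.shape
    distortions, weights = [], []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            r = ref[y:y + block, x:x + block]
            t = test[y:y + block, x:x + block]
            lum = abs(r.mean() - t.mean()) / 255.0                     # luminance difference
            con = abs(r.var() - t.var()) / (r.var() + t.var() + 1e-6)  # contrast (variance) difference
            distortions.append(0.5 * (lum + con))
            weights.append(r.mean() + 1e-6)                            # weight blocks by mean luminance
    return 1.0 - np.average(distortions, weights=weights)              # subtract distortion from 1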
It is possible to automatically estimate depth using different types of motion. In the case of camera motion, a depth map of the entire scene can be calculated. Object motion can also be detected, and moving areas can be assigned smaller depth values than the background. Occlusions provide information on the relative position of moving surfaces.
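A minimal "depth from motion" sketch, assuming a sideways-translating camera so that larger optical-flow magnitude roughly means a nearer scene point. The Farneback parameters and the direct magnitude-to-depth mapping are illustrative; real systems also have to separate camera motion from independent object motion.

import cv2
import numpy as np

def depth_from_motion(frame_prev, frame_next):
    g1 = cv2.cvtColor(frame_prev, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(frame_next, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(g1, g2, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)
    # Fast-moving (near) pixels become bright, slow-moving (far) pixels dark.
    depth = cv2.normalize(magnitude, None, 0, 255, cv2.NORM_MINMAX)
    return depth.astype(np.uint8)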
Another method is to set up both left and right virtual cameras, both offset from the original camera but splitting the offset difference, and then paint out the occlusion edges of isolated objects and characters – essentially clean-plating several background, midground and foreground elements.
Even in the case of stereo shooting, conversion can frequently be necessary. Besides hard-to-shoot scenes, there can be mismatches in the stereo views that are too big to adjust, in which case it is simpler to perform a 2D-to-stereo conversion, treating one of the stereo views as the original 2D source.
Revisiting the original computer data for the two films took four months, plus an additional six months to add the 3D. However, not all CGI films are re-rendered for a 3D re-release because of the costs, the time required, a lack of skilled resources or missing computer data.
The idea of the method is based on the fact that parallel lines, such as railroad tracks and roadsides, appear to converge with distance, eventually reaching a vanishing point at the horizon. Finding this vanishing point gives the farthest point of the whole image.
The VQMT3D project includes several developed metrics for evaluating the quality of 2D to 3D conversion based on the cardboard effect, edge-sharpness mismatch, stuck-to-background objects, and comparison with the 2D version.
Stereo cameras can introduce various mismatches in the stereo image (such as vertical parallax, tilt, color shift, and reflections and glares in different positions) that should be fixed in post-production anyway because they ruin the 3D effect. This correction can sometimes have complexity comparable to a full stereo conversion.

Edge-sharpness mismatch – this artifact may appear because of a blurred depth map at the boundaries of objects: the border becomes precise in one view and blurred in the other. The edge-sharpness mismatch artifact is typically caused by the following:

Even Avatar, notable for its extensive stereo filming, contains several scenes shot in 2D and converted to stereo in post-production. Reasons for shooting in 2D instead of stereo can be financial, technical and sometimes artistic:
There are various automation techniques for depth map creation and background reconstruction. For example, automatic depth estimation can be used to generate initial depth maps for certain frames and shots.
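One way to bootstrap such initial depth maps is an off-the-shelf monocular depth estimator. The sketch below uses the publicly available MiDaS small model via torch.hub; the choice of model and the final rescaling are assumptions about tooling rather than anything this article prescribes, and the result is a relative (not metric) depth map that an artist would still refine.

import cv2
import torch

midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform

def estimate_depth(bgr_frame):
    rgb = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2RGB)
    batch = transform(rgb)
    with torch.no_grad():
        prediction = midas(batch)
        prediction = torch.nn.functional.interpolate(
            prediction.unsqueeze(1), size=rgb.shape[:2],
            mode="bicubic", align_corners=False).squeeze()
    depth = prediction.cpu().numpy()
    # Rescale the relative depth to an 8-bit map for use as an initial depth layer.
    return cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")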
DFD only needs two or three images taken at different focus settings to work properly, whereas DFF needs at least 10 to 15 images but is more accurate.
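A minimal depth-from-focus sketch, under the assumption that a stack of images has been captured at known, increasing focus distances: for every pixel, pick the image in which it is locally sharpest. Using the local Laplacian response as the sharpness measure and a fixed averaging window are illustrative choices.

import cv2
import numpy as np

def depth_from_focus(stack, focus_distances, ksize=9):
    """stack: list of grayscale images; focus_distances: matching list of floats."""
    sharpness = []
    for img in stack:
        lap = cv2.Laplacian(img.astype(np.float64), cv2.CV_64F)
        # Local sharpness = locally averaged squared Laplacian response.
        sharpness.append(cv2.blur(lap * lap, (ksize, ksize)))
    best = np.argmax(np.stack(sharpness), axis=0)        # index of the sharpest image per pixel
    return np.take(np.asarray(focus_distances), best)    # per-pixel focus distance ~ depth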
If the sky is detected in the processed image, it can also be taken into account that more distant objects, besides being hazy, should be more desaturated and more bluish because of a thick air layer.
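An illustrative "atmospheric" depth prior based on this observation: in outdoor shots, distant regions tend to be less saturated and bluer. The particular mix of desaturation and blue-channel dominance below is an assumption for the sketch, not a calibrated haze model.

import cv2
import numpy as np

def haze_depth_prior(bgr):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV).astype(np.float64)
    saturation = hsv[:, :, 1] / 255.0
    b, g, r = [bgr[:, :, i].astype(np.float64) / 255.0 for i in range(3)]
    blueness = np.clip(b - (r + g) / 2.0, 0.0, 1.0)
    # Low saturation and a bluish cast both push a pixel toward "far".
    farness = 0.5 * (1.0 - saturation) + 0.5 * blueness
    return ((1.0 - farness) * 255).astype(np.uint8)       # 255 = near, 0 = far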
If done properly and thoroughly, the conversion produces stereo video of similar quality to "native" stereo video which is shot in stereo and accurately adjusted and aligned in post-production.
Professional stereoscopic rigs are much more expensive and bulky than customary monocular cameras. Some shots, particularly action scenes, can only be shot with relatively small 2D cameras.
The cardboard effect is a phenomenon in which 3D objects located at different depths appear flat to the audience, as if they were made of cardboard, while the relative depth between the objects is preserved.
The region occupied by edge or motion blur is either "stretched" or "tucked", depending on the direction of object displacement. Naturally, this approach leads to mismatches in edge sharpness between the views.
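The warping idea can be sketched as follows, assuming a per-pixel horizontal disparity map: blurring the disparity map before remapping smooths the foreground-to-background transition so that edge pixels stretch instead of leaving holes. The kernel size and the backward-remapping details are assumptions.

import cv2
import numpy as np

def rubber_sheet_warp(image, disparity, blur_ksize=21):
    # Soften the displacement map so that object boundaries stretch smoothly.
    soft = cv2.GaussianBlur(disparity.astype(np.float32), (blur_ksize, blur_ksize), 0)
    h, w = soft.shape
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32), np.arange(h, dtype=np.float32))
    # Sample each output pixel from a horizontally shifted source position;
    # because "soft" varies smoothly, there are no explicit occlusion holes to fill.
    return cv2.remap(image, xs - soft, ys, interpolation=cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_REPLICATE)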
The more the lines converge, the farther away they appear to be. So, for the depth map, the area between two neighboring vanishing lines can be approximated with a gradient plane.
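A toy version of this "depth from perspective" idea, given an already-detected vanishing point: depth simply fades from near at the frame borders to far at the vanishing point. The linear falloff with distance to the vanishing point stands in for the per-region gradient planes and is an assumption.

import numpy as np

def depth_from_vanishing_point(height, width, vp_xy):
    vx, vy = vp_xy
    ys, xs = np.mgrid[0:height, 0:width].astype(np.float64)
    dist = np.hypot(xs - vx, ys - vy)          # distance to the vanishing point
    near = dist / dist.max()                   # 1 far from the VP, 0 at the VP itself
    return (near * 255).astype(np.uint8)       # 255 = near, 0 = the farthest point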
The more layers that are processed separately per frame, the higher the quality of the 3D illusion tends to be.
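A sketch of rendering with several such layers, reusing the hypothetical render_view() function from the earlier depth-image-based rendering sketch: each semi-transparent "slice" carries its own color, alpha and depth, views are rendered per layer and composited back to front. The simple "over" compositing and the opaque treatment of the farthest layer are assumptions.

import numpy as np

def render_layered_view(layers, **dibr_kwargs):
    """layers: list of (color, alpha, depth) tuples sorted far-to-near; alpha in [0, 1]."""
    out = None
    for color, alpha, depth in layers:
        shifted_color = render_view(color, depth, **dibr_kwargs)        # from the earlier sketch
        shifted_alpha = render_view((alpha * 255).astype(np.uint8), depth, **dibr_kwargs) / 255.0
        if out is None:
            out = shifted_color.astype(np.float64)                      # farthest layer treated as opaque
        else:
            a = shifted_alpha[..., None] if shifted_color.ndim == 3 else shifted_alpha
            out = (1.0 - a) * out + a * shifted_color
    return out.astype(np.uint8)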
Time-consuming steps are image segmentation/rotoscoping, depth map creation and uncovered area filling. The latter is especially important for the highest quality conversion.
Two approaches to stereo conversion can be loosely defined: quality semiautomatic conversion for cinema and high-quality 3DTV, and low-quality automatic conversion for cheap 3DTV, VOD and similar applications.
Stereo cameras can betray practical effects used during filming. For example, some scenes in the Lord of the Rings film trilogy were filmed using forced perspective to allow two actors to appear to be different physical sizes. The same scene filmed in stereo would reveal that the actors were not at the same distance from the camera.
However, while the zoom lens would provide acceptable image quality, the stereo separation would be virtually nil over such a distance.
Depth micro-relief and 3D shape are added to the most important surfaces to prevent the "cardboard" effect, in which stereo imagery looks like a combination of flat images simply set at different depths.
Film colorization – many of the issues involved in 3D conversion, such as object edge identification and recognition, are also encountered in colorization.
Stereo post-production workflow is much more complex and not as well-established as 2D workflow, requiring more work and rendering.
Control of comfortable disparity depending on scene type and motion – too much parallax or conflicting depth cues may cause eye strain and nausea.
Image segmentation and the creation of mattes or masks, usually by rotoscoping. Each important surface should be isolated. The level of detail depends on the required conversion quality and budget.
Stereoscopic video games – many S-3D video games do not actually render two images but instead employ 2D-plus-depth rendering conversion techniques as well.

With the increase of films released in 3D, 2D to 3D conversion has become more common. The majority of non-CGI stereo 3D blockbusters are converted fully or at least partially from 2D footage.
Fuzzy semitransparent object borders – such as hair, fur, foreground out-of-focus objects, thin objects
Lack of proper treatment of semitransparent edges, potentially resulting in edge doubling or ghosting.
1336: 1331: 1095: 1041: 660:
International 3D Society University. Presentation from the October 21, 2011 3DU-Japan event in Tokyo.
Without respect to particular algorithms, all conversion workflows should solve the following tasks:
This distortion is subtracted from 1 to obtain the objective quality score.
Depth budget allocation – how much total depth in the scene and where the screen plane will be.
Most semiautomatic methods of stereo conversion use depth maps and depth-image-based rendering.
1546: 1188: 1120: 1034: 8: 1526: 1228: 1223: 1140: 1135: 1012:
Kinect-Variety Fusion: A Novel Hybrid Approach for Artifacts-Free 3DTV Content Generation
572: 1536: 1258: 1253: 1218: 958: 294: 290: 220:
High quality conversion methods should also deal with many typical problems including:
Simple occlusion-filling techniques leading to stretching artifacts near object edges.
The major steps of depth-based conversion methods are image segmentation, depth map creation, stereo generation, and reconstruction and painting of any uncovered areas not filled by the stereo generator. Stereo can be presented in any format for preview purposes, including anaglyph. People engaged in such work may be called depth artists.

Binocular disparity can also be derived from simple geometry.

Stuck-to-background objects – this is the error of foreground objects appearing to "stick" to the background.

Other typical problem areas include translucent objects, reflections, film grain (real or artificial) and similar noise effects, small particles (rain, snow, explosions and so on), and scenes with fast erratic motion. These and other similar issues should be dealt with via a separate method.