Kernel method

In machine learning, kernel machines are a class of algorithms for pattern analysis, whose best-known member is the support-vector machine (SVM). These methods involve using linear classifiers to solve nonlinear problems. The general task of pattern analysis is to find and study general types of relations (for example clusters, rankings, principal components, correlations, classifications) in datasets. For many algorithms that solve these tasks, the data in raw representation have to be explicitly transformed into feature-vector representations via a user-specified feature map: in contrast, kernel methods require only a user-specified kernel, i.e., a similarity function over all pairs of data points computed using inner products. The feature map in kernel machines is infinite-dimensional but only requires a finite-dimensional matrix from user input, according to the representer theorem. Kernel machines are slow to compute for datasets larger than a couple of thousand examples without parallel processing.

Kernel methods owe their name to the use of kernel functions, which enable them to operate in a high-dimensional, implicit feature space without ever computing the coordinates of the data in that space, but rather by simply computing the inner products between the images of all pairs of data in the feature space. This operation is often computationally cheaper than the explicit computation of the coordinates. This approach is called the "kernel trick". Kernel functions have been introduced for sequence data, graphs, text, images, as well as vectors.

Algorithms capable of operating with kernels include the kernel perceptron, support-vector machines (SVM), Gaussian processes, principal components analysis (PCA), canonical correlation analysis, ridge regression, spectral clustering, linear adaptive filters and many others.

Most kernel algorithms are based on convex optimization or eigenproblems and are statistically well-founded. Typically, their statistical properties are analyzed using statistical learning theory (for example, using Rademacher complexity).
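The cost argument can be made concrete with a small sketch (illustrative only; the helper names are ours): for the homogeneous degree-2 polynomial kernel k(x, y) = (x·y)², the implicit feature map contains all pairwise products x_i x_j, yet the kernel evaluates the same inner product with a single d-dimensional dot product.

```python
import numpy as np

def phi(x):
    # Explicit feature map: all pairwise products x_i * x_j (d^2 coordinates).
    return np.outer(x, x).ravel()

def k(x, y):
    # Kernel trick: one d-dimensional dot product, then a square.
    return np.dot(x, y) ** 2

x = np.array([1.0, 2.0, 3.0])
y = np.array([0.5, -1.0, 2.0])

explicit = np.dot(phi(x), phi(y))  # inner product in the d^2-dim feature space
implicit = k(x, y)                 # same value, without building coordinates
print(explicit, implicit)          # both equal (x . y)^2 = 4.5^2 = 20.25
```

The two values agree because ⟨φ(x), φ(y)⟩ = Σᵢⱼ xᵢxⱼyᵢyⱼ = (x·y)²; only the kernel form avoids materializing the d² coordinates.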
Motivation and informal explanation

Kernel methods can be thought of as instance-based learners: rather than learning some fixed set of parameters corresponding to the features of their inputs, they instead "remember" the i-th training example (x_i, y_i) and learn for it a corresponding weight w_i. Prediction for unlabeled inputs, i.e., those not in the training set, is treated by the application of a similarity function k, called a kernel, between the unlabeled input x′ and each of the training inputs x_i. For instance, a kernelized binary classifier typically computes a weighted sum of similarities

    ŷ = sgn ∑_{i=1}^{n} w_i y_i k(x_i, x′),

where

- ŷ ∈ {−1, +1} is the kernelized binary classifier's predicted label for the unlabeled input x′ whose hidden true label y is of interest;
- k : X × X → ℝ is the kernel function that measures similarity between any pair of inputs x, x′ ∈ X;
- the sum ranges over the n labeled examples {(x_i, y_i)}_{i=1}^{n} in the classifier's training set, with y_i ∈ {−1, +1};
- the w_i ∈ ℝ are the weights for the training examples, as determined by the learning algorithm;
- the sign function sgn determines whether the predicted classification ŷ comes out positive or negative.

Kernel classifiers were described as early as the 1960s, with the invention of the kernel perceptron. They rose to great prominence with the popularity of the support-vector machine (SVM) in the 1990s, when the SVM was found to be competitive with neural networks on tasks such as handwriting recognition.
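The weighted-sum prediction can be sketched in a few lines (a minimal illustration, not a full learner: the weights are assumed to be given, and the RBF kernel is just one common choice of similarity function).

```python
import numpy as np

def rbf_kernel(x, x2, gamma=1.0):
    # k(x, x') = exp(-gamma * ||x - x'||^2), a common similarity function
    return np.exp(-gamma * np.sum((x - x2) ** 2))

def predict(x_new, X_train, y_train, weights, kernel=rbf_kernel):
    # y_hat = sgn( sum_i w_i * y_i * k(x_i, x_new) )
    s = sum(w * y * kernel(x_i, x_new)
            for x_i, y, w in zip(X_train, y_train, weights))
    return 1 if s >= 0 else -1

X = np.array([[0.0, 0.0], [1.0, 1.0]])  # two "remembered" training examples
y = [1, -1]                             # their labels
w = [1.0, 1.0]                          # weights, assumed learned elsewhere
print(predict(np.array([0.1, 0.1]), X, y, w))  # nearer the +1 example
```

Note that nothing is "fit" here beyond storing the training set: prediction is entirely driven by kernel similarities to the remembered examples.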
3878: 3659: 3460: 3434: 3394: 2183: 1787: 1042: 1038: 743: 672: 554: 285: 170: 3439: 3002:
Mathematics: the kernel trick

[Figure: SVM with kernel given by φ((a, b)) = (a, b, a² + b²) and thus k(x, y) = x·y + ‖x‖² ‖y‖². The training points are mapped to a 3-dimensional space where a separating hyperplane can be easily found.]

The kernel trick avoids the explicit mapping that is needed to get linear learning algorithms to learn a nonlinear function or decision boundary. For all x and x′ in the input space X, certain functions k(x, x′) can be expressed as an inner product in another space V. The function k : X × X → ℝ is often referred to as a kernel or a kernel function. The word "kernel" is used in mathematics to denote a weighting function for a weighted sum or integral.

Certain problems in machine learning have more structure than an arbitrary weighting function k. The computation is made much simpler if the kernel can be written in the form of a "feature map" φ : X → V which satisfies

    k(x, x′) = ⟨φ(x), φ(x′)⟩_V.

The key restriction is that ⟨·, ·⟩_V must be a proper inner product. On the other hand, an explicit representation for φ is not necessary, as long as V is an inner product space. The alternative follows from Mercer's theorem: an implicitly defined function φ exists whenever the space X can be equipped with a suitable measure ensuring the function k satisfies Mercer's condition.
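The feature map from the figure above can be checked numerically (a quick sketch using the article's own example): with φ((a, b)) = (a, b, a² + b²), the kernel k(x, y) = x·y + ‖x‖² ‖y‖² reproduces ⟨φ(x), φ(y)⟩ without computing φ.

```python
import numpy as np

def phi(x):
    # The figure's explicit 3-dimensional feature map phi((a, b)).
    a, b = x
    return np.array([a, b, a**2 + b**2])

def k(x, y):
    # Matching kernel: k(x, y) = x . y + ||x||^2 * ||y||^2.
    return np.dot(x, y) + np.dot(x, x) * np.dot(y, y)

x = np.array([1.0, 2.0])
y = np.array([3.0, -1.0])
print(np.dot(phi(x), phi(y)), k(x, y))  # both 1*3 + 2*(-1) + 5*10 = 51.0
```

Expanding ⟨φ(x), φ(y)⟩ = x₁y₁ + x₂y₂ + (x₁² + x₂²)(y₁² + y₂²) shows the identity holds for every pair of inputs, not just this one.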
Mercer's theorem is similar to a generalization of the result from linear algebra that associates an inner product to any positive-definite matrix. In fact, Mercer's condition can be reduced to this simpler case. If we choose as our measure the counting measure μ(T) = |T| for all T ⊂ X, which counts the number of points inside the set T, then the integral in Mercer's theorem reduces to a summation

    ∑_{i=1}^{n} ∑_{j=1}^{n} k(x_i, x_j) c_i c_j ≥ 0.

If this summation holds for all finite sequences of points (x_1, …, x_n) in X and all choices of n real-valued coefficients (c_1, …, c_n) (cf. positive definite kernel), then the function k satisfies Mercer's condition.
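For the linear kernel the summation above can be verified directly (an illustrative sketch with arbitrary fixed points and coefficients): with k(x, y) = x·y, the double sum equals ‖∑ᵢ cᵢxᵢ‖², so it is non-negative for every finite point set and coefficient choice.

```python
import numpy as np

def k(x, y):
    # Linear kernel: plainly a Mercer kernel.
    return np.dot(x, y)

points = [np.array([1.0, 0.0]), np.array([0.5, 2.0]), np.array([-1.0, 1.0])]
coeffs = [0.7, -1.2, 0.4]

# Mercer's summation: sum_i sum_j k(x_i, x_j) c_i c_j
double_sum = sum(ci * cj * k(xi, xj)
                 for xi, ci in zip(points, coeffs)
                 for xj, cj in zip(points, coeffs))

# Equals the squared norm of sum_i c_i x_i, hence >= 0.
combo = sum(c * x for x, c in zip(points, coeffs))
print(double_sum >= -1e-12, abs(double_sum - np.dot(combo, combo)) < 1e-12)
```

The same non-negativity must hold for any choice of points and coefficients for k to satisfy Mercer's condition; a single failing choice rules the kernel out.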
Some algorithms that depend on arbitrary relationships in the native space X would, in fact, have a linear interpretation in a different setting: the range space of φ. The linear interpretation gives us insight about the algorithm. Furthermore, there is often no need to compute φ directly during computation, as is the case with support-vector machines. Some cite this running-time shortcut as the primary benefit. Researchers also use it to justify the meanings and properties of existing algorithms.

Theoretically, a Gram matrix K ∈ ℝ^{n×n} with respect to {x_1, …, x_n} (sometimes also called a "kernel matrix"), where K_ij = k(x_i, x_j), must be positive semi-definite (PSD). Empirically, for machine learning heuristics, choices of a function k that do not satisfy Mercer's condition may still perform reasonably if k at least approximates the intuitive idea of similarity. Regardless of whether k is a Mercer kernel, k may still be referred to as a "kernel".

If the kernel function k is also a covariance function as used in Gaussian processes, then the Gram matrix K can also be called a covariance matrix.
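The PSD requirement on the Gram matrix is easy to probe numerically (an illustrative sketch; the RBF kernel and the sample points are our choices): a symmetric matrix is PSD exactly when it has no negative eigenvalues.

```python
import numpy as np

def rbf(x, y):
    # RBF kernel k(x, y) = exp(-||x - y||^2), a standard Mercer kernel.
    return np.exp(-np.sum((x - y) ** 2))

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [1.5, 1.5]])

# Gram matrix K_ij = k(x_i, x_j)
K = np.array([[rbf(a, b) for b in X] for a in X])

eigenvalues = np.linalg.eigvalsh(K)  # symmetric matrix => real spectrum
print(eigenvalues.min() >= -1e-10)   # PSD up to floating-point error
```

A heuristic "kernel" that violates Mercer's condition would show up here as a genuinely negative eigenvalue for some point set, even though, as noted above, it may still work acceptably in practice.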
Guyon, Isabelle; Boser, B.; Vapnik, Vladimir (1993).
3365: 3337: 3314: 3294: 3274: 3254: 3181: 3125: 3082: 3052: 3032: 3008: 2985: 2929: 2909: 2885: 2829: 2708: 2688: 2662: 2617: 2583: 2555: 2535: 2503: 2483: 2442: 2341: 2301: 2281: 2216: 2192: 2144: 2120: 2093: 2071: 1961: 1876: 1815: 1795: 1754: 1700: 1628: 1576: 1528: 1506: 1479: 1426: 1307: 1274: 1247: 1223: 1193: 1145: 1125: 2682:, which counts the number of points inside the set 3832: 3373: 3343: 3320: 3300: 3280: 3260: 3236: 3167: 3111: 3058: 3038: 3018: 2991: 2967: 2915: 2895: 2871: 2815: 2694: 2674: 2648: 2589: 2565: 2541: 2513: 2489: 2469: 2428: 2327: 2287: 2250: 2202: 2174: 2130: 2106: 2079: 2045: 1947: 1830: 1801: 1775: 1737: 1686: 1607: 1562: 1512: 1492: 1465: 1409: 1301:typically computes a weighted sum of similarities 1289: 1260: 1229: 1206: 1179: 1131: 3811: 3712: 3662:; Rostamizadeh, Afshin; Talwalkar, Ameet (2012). 3175:(sometimes also called a "kernel matrix"), where 3066:directly during computation, as is the case with 1948:{\displaystyle \varphi ((a,b))=(a,b,a^{2}+b^{2})} 3876: 3582: 1861: 1809:determines whether the predicted classification 3737: 911:List of datasets for machine-learning research 3689:"Support Vector Machines: Mercer's Condition" 2175:{\displaystyle k(\mathbf {x} ,\mathbf {x'} )} 1030:Kernel methods owe their name to the use of 1019:over all pairs of data points computed using 944: 3162: 3126: 2456: 2443: 2412: 2372: 1732: 1714: 1664: 1629: 1460: 1442: 3870:onlineprediction.net Kernel Methods Article 3545: 3812:Liu, W.; Principe, J.; Haykin, S. (2010). 951: 937: 3691:. Support Vector Machines. Archived from 3635: 3625: 3592: 3093: 2244: 1769: 1556: 3328:may still be referred to as a "kernel". 
1865: 1187:and learn for it a corresponding weight 1180:{\displaystyle (\mathbf {x} _{i},y_{i})} 27:Class of algorithms for pattern analysis 3715:Gaussian Processes for Machine Learning 1694:in the classifier's training set, with 1466:{\displaystyle {\hat {y}}\in \{-1,+1\}} 14: 3877: 1776:{\displaystyle w_{i}\in \mathbb {R} } 1007:representations via a user-specified 3610:"Kernel Methods in Machine Learning" 2968:{\displaystyle (c_{1},\dots ,c_{n})} 1802:{\displaystyle \operatorname {sgn} } 1115:Kernel methods can be thought of as 1091:Most kernel algorithms are based on 1057:, text, images, as well as vectors. 3885:Kernel methods for machine learning 3795:Kernel Methods for Pattern Analysis 1111:Motivation and informal explanation 906:Glossary of artificial intelligence 24: 3779: 3686: 3428: 3011: 2888: 2558: 2506: 2461: 2417: 2320: 2310: 2235: 2225: 2195: 2123: 1738:{\displaystyle y_{i}\in \{-1,+1\}} 1600: 1547: 1537: 25: 3906: 3857: 3836:; Smola, A. J.; Bach, F. (2018). 2529:: an implicitly defined function 2061:to learn a nonlinear function or 1064:, support-vector machines (SVM), 975:, whose best known member is the 3666:. US, Massachusetts: MIT Press. 3484:Kernel methods for vector output 3367: 3221: 3206: 3152: 3131: 3084: 2856: 2835: 2774: 2759: 2573:can be equipped with a suitable 2525:. The alternative follows from 2400: 2382: 2358: 2349: 2161: 2152: 2096: 2073: 2029: 2009: 1996: 1988: 1977: 1969: 1637: 1587: 1578: 1482: 1393: 1378: 1290:{\displaystyle \mathbf {x} _{i}} 1277: 1268:and each of the training inputs 1250: 1151: 3664:Foundations of Machine Learning 3471:Neural network Gaussian process 3388: 1838:comes out positive or negative. 3797:. Cambridge University Press. 3731: 3706: 3680: 3652: 3601: 3564: 3539: 3515: 3231: 3201: 3019:{\displaystyle {\mathcal {X}}} 2999:satisfies Mercer's condition. 
2962: 2930: 2896:{\displaystyle {\mathcal {X}}} 2866: 2830: 2784: 2754: 2642: 2634: 2627: 2621: 2566:{\displaystyle {\mathcal {X}}} 2514:{\displaystyle {\mathcal {V}}} 2408: 2395: 2386: 2378: 2366: 2345: 2315: 2240: 2203:{\displaystyle {\mathcal {V}}} 2169: 2148: 2131:{\displaystyle {\mathcal {X}}} 2033: 2025: 2013: 2005: 1981: 1965: 1942: 1904: 1898: 1895: 1883: 1880: 1822: 1660: 1632: 1552: 1433: 1401: 1373: 1314: 1297:. For instance, a kernelized 1241:, between the unlabeled input 1174: 1146: 1074:canonical correlation analysis 971:are a class of algorithms for 326:Relevance vector machine (RVM) 13: 1: 3573:Automation and Remote Control 3550:. Elsevier B.V. p. 203. 3546:Theodoridis, Sergios (2008). 3509: 2497:is not necessary, as long as 2107:{\displaystyle \mathbf {x'} } 1862:Mathematics: the kernel trick 1493:{\displaystyle \mathbf {x'} } 1261:{\displaystyle \mathbf {x'} } 1070:principal components analysis 815:Computational learning theory 379:Expectation–maximization (EM) 3455:Radial basis function kernel 3374:{\displaystyle \mathbf {K} } 3246:positive semi-definite (PSD) 2436:The key restriction is that 2080:{\displaystyle \mathbf {x} } 772:Coefficient of determination 619:Convolutional neural network 331:Support vector machine (SVM) 7: 3477: 2649:{\displaystyle \mu (T)=|T|} 1101:statistical learning theory 923:Outline of machine learning 820:Empirical risk minimization 10: 3911: 3637:10.1214/009053607000000677 3403:inverse distance weighting 2675:{\displaystyle T\subset X} 2549:exists whenever the space 2258:is often referred to as a 1831:{\displaystyle {\hat {y}}} 560:Feedforward neural network 311:Artificial neural networks 3895:Classification algorithms 3762:10.1007/s11004-010-9276-7 3489:Kernel density estimation 2923:real-valued coefficients 1870:SVM with kernel given by 543:Artificial neural network 3741:Mathematical Geosciences 3614:The Annals of Statistics 3059:{\displaystyle \varphi } 3039:{\displaystyle \varphi } 2977:positive definite kernel 
2542:{\displaystyle \varphi } 2490:{\displaystyle \varphi } 1618:the sum ranges over the 1500:whose hidden true label 852:Journals and conferences 799:Mathematical foundations 709:Temporal difference (TD) 565:Recurrent neural network 485:Conditional random field 408:Dimensionality reduction 156:Dimensionality reduction 118:Quantum machine learning 113:Neuromorphic engineering 73:Self-supervised learning 68:Semi-supervised learning 3423:handwriting recognition 3359:, then the Gram matrix 3331:If the kernel function 3068:support-vector machines 2182:can be expressed as an 1856:handwriting recognition 1117:instance-based learners 1086:linear adaptive filters 261:Apprenticeship learning 3419:information extraction 3375: 3345: 3322: 3302: 3282: 3262: 3238: 3169: 3113: 3060: 3040: 3020: 2993: 2969: 2917: 2897: 2873: 2817: 2750: 2729: 2696: 2676: 2650: 2591: 2577:ensuring the function 2567: 2543: 2515: 2491: 2471: 2430: 2329: 2289: 2252: 2204: 2176: 2132: 2108: 2081: 2054: 2047: 1949: 1848:support-vector machine 1832: 1803: 1777: 1739: 1688: 1609: 1564: 1514: 1494: 1467: 1411: 1349: 1291: 1262: 1231: 1208: 1181: 1133: 977:support-vector machine 810:Bias–variance tradeoff 692:Reinforcement learning 668:Spiking neural network 78:Reinforcement learning 3466:Neural tangent kernel 3381:can also be called a 3376: 3346: 3323: 3303: 3283: 3263: 3239: 3170: 3114: 3061: 3041: 3021: 2994: 2979:), then the function 2970: 2918: 2898: 2874: 2818: 2730: 2709: 2697: 2677: 2651: 2592: 2568: 2544: 2516: 2492: 2472: 2431: 2330: 2290: 2253: 2205: 2177: 2133: 2109: 2082: 2048: 1950: 1869: 1833: 1804: 1778: 1740: 1689: 1610: 1565: 1515: 1495: 1468: 1412: 1329: 1292: 1263: 1232: 1209: 1207:{\displaystyle w_{i}} 1182: 1139:-th training example 1134: 1105:Rademacher complexity 646:Neural radiance field 468:Structured prediction 191:Structured prediction 63:Unsupervised learning 3363: 3335: 3312: 3308:is a Mercer kernel, 3292: 3272: 3252: 3179: 3123: 3080: 3050: 3030: 3006: 2983: 2927: 2907: 2883: 
2827: 2706: 2686: 2660: 2615: 2581: 2553: 2533: 2501: 2481: 2440: 2339: 2299: 2279: 2214: 2190: 2142: 2138:, certain functions 2118: 2091: 2069: 1959: 1874: 1813: 1793: 1752: 1698: 1626: 1574: 1526: 1504: 1477: 1424: 1305: 1272: 1245: 1221: 1191: 1143: 1123: 1103:(for example, using 993:principal components 835:Statistical learning 733:Learning with humans 525:Local outlier factor 3864:Kernel-Machines Org 3754:2010MaGeo..42..487H 3548:Pattern Recognition 3499:Similarity learning 3494:Representer theorem 3353:covariance function 2903:and all choices of 2523:inner product space 2114:in the input space 2059:learning algorithms 1683: 1216:similarity function 1093:convex optimization 1082:spectral clustering 1025:Representer theorem 1017:similarity function 678:Electrochemical RAM 585:reservoir computing 316:Logistic regression 235:Supervised learning 221:Multimodal learning 196:Feature engineering 141:Generative modeling 103:Rule-based learning 98:Curriculum learning 58:Supervised learning 33:Part of a series on 3866:—community website 3371: 3357:Gaussian processes 3341: 3318: 3298: 3278: 3258: 3234: 3165: 3109: 3056: 3036: 3016: 2989: 2965: 2913: 2893: 2869: 2813: 2692: 2672: 2646: 2599:Mercer's condition 2587: 2563: 2539: 2511: 2487: 2467: 2426: 2325: 2285: 2248: 2200: 2172: 2128: 2104: 2077: 2055: 2043: 1945: 1828: 1799: 1773: 1735: 1684: 1663: 1605: 1560: 1510: 1490: 1463: 1407: 1287: 1258: 1227: 1204: 1177: 1129: 1066:Gaussian processes 246: • 161:Density estimation 3849:978-0-262-53657-8 3450:Polynomial kernel 3407:3D reconstruction 3383:covariance matrix 3344:{\displaystyle k} 3321:{\displaystyle k} 3301:{\displaystyle k} 3281:{\displaystyle k} 3261:{\displaystyle k} 3073:Theoretically, a 2992:{\displaystyle k} 2916:{\displaystyle n} 2695:{\displaystyle T} 2590:{\displaystyle k} 2288:{\displaystyle k} 2186:in another space 2063:decision boundary 1854:on tasks such as 1844:kernel perceptron 1825: 1622:labeled examples 1513:{\displaystyle y} 1436: 1317: 
1299:binary classifier 1230:{\displaystyle k} 1132:{\displaystyle i} 1088:and many others. 1062:kernel perceptron 961: 960: 766:Model diagnostics 749:Human-in-the-loop 592:Boltzmann machine 505:Anomaly detection 301:Linear regression 216:Ontology learning 211:Grammar induction 186:Semantic analysis 181:Association rules 166:Anomaly detection 108:Neuro-symbolic AI 16:(Redirected from 3902: 3853: 3829: 3808: 3787:Shawe-Taylor, J. 3774: 3773: 3735: 3729: 3728: 3710: 3704: 3703: 3701: 3700: 3687:Sewell, Martin. 3684: 3678: 3677: 3656: 3650: 3649: 3639: 3629: 3605: 3599: 3598: 3596: 3580: 3568: 3562: 3561: 3543: 3537: 3536: 3534: 3533: 3519: 3380: 3378: 3377: 3372: 3370: 3350: 3348: 3347: 3342: 3327: 3325: 3324: 3319: 3307: 3305: 3304: 3299: 3287: 3285: 3284: 3279: 3267: 3265: 3264: 3259: 3243: 3241: 3240: 3235: 3230: 3229: 3224: 3215: 3214: 3209: 3194: 3193: 3174: 3172: 3171: 3166: 3161: 3160: 3155: 3140: 3139: 3134: 3119:with respect to 3118: 3116: 3115: 3110: 3108: 3107: 3096: 3087: 3065: 3063: 3062: 3057: 3045: 3043: 3042: 3037: 3025: 3023: 3022: 3017: 3015: 3014: 2998: 2996: 2995: 2990: 2974: 2972: 2971: 2966: 2961: 2960: 2942: 2941: 2922: 2920: 2919: 2914: 2902: 2900: 2899: 2894: 2892: 2891: 2878: 2876: 2875: 2870: 2865: 2864: 2859: 2844: 2843: 2838: 2822: 2820: 2819: 2814: 2806: 2805: 2796: 2795: 2783: 2782: 2777: 2768: 2767: 2762: 2749: 2744: 2728: 2723: 2701: 2699: 2698: 2693: 2681: 2679: 2678: 2673: 2655: 2653: 2652: 2647: 2645: 2637: 2610:counting measure 2596: 2594: 2593: 2588: 2572: 2570: 2569: 2564: 2562: 2561: 2548: 2546: 2545: 2540: 2527:Mercer's theorem 2520: 2518: 2517: 2512: 2510: 2509: 2496: 2494: 2493: 2488: 2476: 2474: 2473: 2468: 2466: 2465: 2464: 2435: 2433: 2432: 2427: 2422: 2421: 2420: 2407: 2406: 2385: 2365: 2364: 2352: 2335:which satisfies 2334: 2332: 2331: 2326: 2324: 2323: 2314: 2313: 2294: 2292: 2291: 2286: 2257: 2255: 2254: 2249: 2247: 2239: 2238: 2229: 2228: 2209: 2207: 2206: 2201: 2199: 2198: 2181: 2179: 2178: 2173: 2168: 2167: 2155: 
2137: 2135: 2134: 2129: 2127: 2126: 2113: 2111: 2110: 2105: 2103: 2102: 2086: 2084: 2083: 2078: 2076: 2052: 2050: 2049: 2044: 2042: 2041: 2036: 2032: 2022: 2021: 2016: 2012: 1999: 1991: 1980: 1972: 1954: 1952: 1951: 1946: 1941: 1940: 1928: 1927: 1837: 1835: 1834: 1829: 1827: 1826: 1818: 1808: 1806: 1805: 1800: 1782: 1780: 1779: 1774: 1772: 1764: 1763: 1744: 1742: 1741: 1736: 1710: 1709: 1693: 1691: 1690: 1685: 1682: 1677: 1659: 1658: 1646: 1645: 1640: 1621: 1614: 1612: 1611: 1606: 1604: 1603: 1594: 1593: 1581: 1569: 1567: 1566: 1561: 1559: 1551: 1550: 1541: 1540: 1519: 1517: 1516: 1511: 1499: 1497: 1496: 1491: 1489: 1488: 1472: 1470: 1469: 1464: 1438: 1437: 1429: 1416: 1414: 1413: 1408: 1400: 1399: 1387: 1386: 1381: 1369: 1368: 1359: 1358: 1348: 1343: 1319: 1318: 1310: 1296: 1294: 1293: 1288: 1286: 1285: 1280: 1267: 1265: 1264: 1259: 1257: 1256: 1236: 1234: 1233: 1228: 1213: 1211: 1210: 1205: 1203: 1202: 1186: 1184: 1183: 1178: 1173: 1172: 1160: 1159: 1154: 1138: 1136: 1135: 1130: 1078:ridge regression 1032:kernel functions 981:pattern analysis 973:pattern analysis 965:machine learning 953: 946: 939: 900:Related articles 777:Confusion matrix 530:Isolation forest 475:Graphical models 254: 253: 206:Learning to rank 201:Feature learning 39:Machine learning 30: 29: 21: 3910: 3909: 3905: 3904: 3903: 3901: 3900: 3899: 3875: 3874: 3860: 3850: 3826: 3805: 3791:Cristianini, N. 
3782: 3780:Further reading 3777: 3736: 3732: 3725: 3711: 3707: 3698: 3696: 3685: 3681: 3674: 3657: 3653: 3606: 3602: 3569: 3565: 3558: 3544: 3540: 3531: 3529: 3523:"Kernel method" 3521: 3520: 3516: 3512: 3504:Cover's theorem 3480: 3445:Kernel smoother 3431: 3429:Popular kernels 3415:cheminformatics 3391: 3366: 3364: 3361: 3360: 3336: 3333: 3332: 3313: 3310: 3309: 3293: 3290: 3289: 3273: 3270: 3269: 3253: 3250: 3249: 3225: 3220: 3219: 3210: 3205: 3204: 3186: 3182: 3180: 3177: 3176: 3156: 3151: 3150: 3135: 3130: 3129: 3124: 3121: 3120: 3097: 3092: 3091: 3083: 3081: 3078: 3077: 3051: 3048: 3047: 3031: 3028: 3027: 3010: 3009: 3007: 3004: 3003: 2984: 2981: 2980: 2956: 2952: 2937: 2933: 2928: 2925: 2924: 2908: 2905: 2904: 2887: 2886: 2884: 2881: 2880: 2860: 2855: 2854: 2839: 2834: 2833: 2828: 2825: 2824: 2801: 2797: 2791: 2787: 2778: 2773: 2772: 2763: 2758: 2757: 2745: 2734: 2724: 2713: 2707: 2704: 2703: 2687: 2684: 2683: 2661: 2658: 2657: 2641: 2633: 2616: 2613: 2612: 2582: 2579: 2578: 2557: 2556: 2554: 2551: 2550: 2534: 2531: 2530: 2505: 2504: 2502: 2499: 2498: 2482: 2479: 2478: 2460: 2459: 2455: 2441: 2438: 2437: 2416: 2415: 2411: 2399: 2398: 2381: 2357: 2356: 2348: 2340: 2337: 2336: 2319: 2318: 2309: 2308: 2300: 2297: 2296: 2280: 2277: 2276: 2265:kernel function 2243: 2234: 2233: 2224: 2223: 2215: 2212: 2211: 2210:. 
The function 2194: 2193: 2191: 2188: 2187: 2160: 2159: 2151: 2143: 2140: 2139: 2122: 2121: 2119: 2116: 2115: 2095: 2094: 2092: 2089: 2088: 2072: 2070: 2067: 2066: 2037: 2028: 2024: 2023: 2017: 2008: 2004: 2003: 1995: 1987: 1976: 1968: 1960: 1957: 1956: 1936: 1932: 1923: 1919: 1875: 1872: 1871: 1864: 1852:neural networks 1817: 1816: 1814: 1811: 1810: 1794: 1791: 1790: 1768: 1759: 1755: 1753: 1750: 1749: 1705: 1701: 1699: 1696: 1695: 1678: 1667: 1654: 1650: 1641: 1636: 1635: 1627: 1624: 1623: 1619: 1599: 1598: 1586: 1585: 1577: 1575: 1572: 1571: 1555: 1546: 1545: 1536: 1535: 1527: 1524: 1523: 1520:is of interest; 1505: 1502: 1501: 1481: 1480: 1478: 1475: 1474: 1428: 1427: 1425: 1422: 1421: 1392: 1391: 1382: 1377: 1376: 1364: 1360: 1354: 1350: 1344: 1333: 1309: 1308: 1306: 1303: 1302: 1281: 1276: 1275: 1273: 1270: 1269: 1249: 1248: 1246: 1243: 1242: 1222: 1219: 1218: 1198: 1194: 1192: 1189: 1188: 1168: 1164: 1155: 1150: 1149: 1144: 1141: 1140: 1124: 1121: 1120: 1113: 1001:classifications 969:kernel machines 957: 928: 927: 901: 893: 892: 853: 845: 844: 805:Kernel machines 800: 792: 791: 767: 759: 758: 739:Active learning 734: 726: 725: 694: 684: 683: 609:Diffusion model 545: 535: 534: 507: 497: 496: 470: 460: 459: 415:Factor analysis 410: 400: 399: 383: 346: 336: 335: 256: 255: 239: 238: 237: 226: 225: 131: 123: 122: 88:Online learning 53: 41: 28: 23: 22: 15: 12: 11: 5: 3908: 3898: 3897: 3892: 3887: 3873: 3872: 3867: 3859: 3858:External links 3856: 3855: 3854: 3848: 3830: 3824: 3809: 3803: 3781: 3778: 3776: 3775: 3748:(5): 487–517. 
3730: 3723: 3705: 3679: 3672: 3660:Mohri, Mehryar 3651: 3600: 3594:10.1.1.17.7215 3563: 3556: 3538: 3513: 3511: 3508: 3507: 3506: 3501: 3496: 3491: 3486: 3479: 3476: 3475: 3474: 3468: 3463: 3461:String kernels 3458: 3452: 3447: 3442: 3437: 3430: 3427: 3411:bioinformatics 3390: 3387: 3369: 3340: 3317: 3297: 3277: 3257: 3233: 3228: 3223: 3218: 3213: 3208: 3203: 3200: 3197: 3192: 3189: 3185: 3164: 3159: 3154: 3149: 3146: 3143: 3138: 3133: 3128: 3106: 3103: 3100: 3095: 3090: 3086: 3055: 3035: 3013: 2988: 2964: 2959: 2955: 2951: 2948: 2945: 2940: 2936: 2932: 2912: 2890: 2868: 2863: 2858: 2853: 2850: 2847: 2842: 2837: 2832: 2812: 2809: 2804: 2800: 2794: 2790: 2786: 2781: 2776: 2771: 2766: 2761: 2756: 2753: 2748: 2743: 2740: 2737: 2733: 2727: 2722: 2719: 2716: 2712: 2691: 2671: 2668: 2665: 2644: 2640: 2636: 2632: 2629: 2626: 2623: 2620: 2586: 2560: 2538: 2508: 2486: 2463: 2458: 2454: 2451: 2448: 2445: 2425: 2419: 2414: 2410: 2405: 2402: 2397: 2394: 2391: 2388: 2384: 2380: 2377: 2374: 2371: 2368: 2363: 2360: 2355: 2351: 2347: 2344: 2322: 2317: 2312: 2307: 2304: 2284: 2246: 2242: 2237: 2232: 2227: 2222: 2219: 2197: 2171: 2166: 2163: 2158: 2154: 2150: 2147: 2125: 2101: 2098: 2075: 2040: 2035: 2031: 2027: 2020: 2015: 2011: 2007: 2002: 1998: 1994: 1990: 1986: 1983: 1979: 1975: 1971: 1967: 1964: 1944: 1939: 1935: 1931: 1926: 1922: 1918: 1915: 1912: 1909: 1906: 1903: 1900: 1897: 1894: 1891: 1888: 1885: 1882: 1879: 1863: 1860: 1840: 1839: 1824: 1821: 1798: 1784: 1771: 1767: 1762: 1758: 1746: 1734: 1731: 1728: 1725: 1722: 1719: 1716: 1713: 1708: 1704: 1681: 1676: 1673: 1670: 1666: 1662: 1657: 1653: 1649: 1644: 1639: 1634: 1631: 1616: 1602: 1597: 1592: 1589: 1584: 1580: 1558: 1554: 1549: 1544: 1539: 1534: 1531: 1521: 1509: 1487: 1484: 1462: 1459: 1456: 1453: 1450: 1447: 1444: 1441: 1435: 1432: 1406: 1403: 1398: 1395: 1390: 1385: 1380: 1375: 1372: 1367: 1363: 1357: 1353: 1347: 1342: 1339: 1336: 1332: 1328: 1325: 1322: 1316: 1313: 1284: 1279: 1255: 1252: 1226: 1201: 1197: 1176: 1171: 
1167: 1163: 1158: 1153: 1148: 1128: 1112: 1109: 1043:inner products 1021:inner products 1005:feature vector 959: 958: 956: 955: 948: 941: 933: 930: 929: 926: 925: 920: 919: 918: 908: 902: 899: 898: 895: 894: 891: 890: 885: 880: 875: 870: 865: 860: 854: 851: 850: 847: 846: 843: 842: 837: 832: 827: 825:Occam learning 822: 817: 812: 807: 801: 798: 797: 794: 793: 790: 789: 784: 782:Learning curve 779: 774: 768: 765: 764: 761: 760: 757: 756: 751: 746: 741: 735: 732: 731: 728: 727: 724: 723: 722: 721: 711: 706: 701: 695: 690: 689: 686: 685: 682: 681: 675: 670: 665: 660: 659: 658: 648: 643: 642: 641: 636: 631: 626: 616: 611: 606: 601: 600: 599: 589: 588: 587: 582: 577: 572: 562: 557: 552: 546: 541: 540: 537: 536: 533: 532: 527: 522: 514: 508: 503: 502: 499: 498: 495: 494: 493: 492: 487: 482: 471: 466: 465: 462: 461: 458: 457: 452: 447: 442: 437: 432: 427: 422: 417: 411: 406: 405: 402: 401: 398: 397: 392: 387: 381: 376: 371: 363: 358: 353: 347: 342: 341: 338: 337: 334: 333: 328: 323: 318: 313: 308: 303: 298: 290: 289: 288: 283: 278: 268: 266:Decision trees 263: 257: 243:classification 233: 232: 231: 228: 227: 224: 223: 218: 213: 208: 203: 198: 193: 188: 183: 178: 173: 168: 163: 158: 153: 148: 143: 138: 136:Classification 132: 129: 128: 125: 124: 121: 120: 115: 110: 105: 100: 95: 93:Batch learning 90: 85: 80: 75: 70: 65: 60: 54: 51: 50: 47: 46: 35: 34: 26: 9: 6: 4: 3: 2: 3907: 3896: 3893: 3891: 3890:Geostatistics 3888: 3886: 3883: 3882: 3880: 3871: 3868: 3865: 3862: 3861: 3851: 3845: 3842:. MIT Press. 3841: 3840: 3835: 3834:Schölkopf, B. 3831: 3827: 3825:9781118211212 3821: 3817: 3816: 3810: 3806: 3804:9780511809682 3800: 3796: 3792: 3788: 3784: 3783: 3771: 3767: 3763: 3759: 3755: 3751: 3747: 3743: 3742: 3734: 3726: 3724:0-262-18253-X 3720: 3717:. MIT Press. 
3716: 3709: 3695:on 2018-10-15 3694: 3690: 3683: 3675: 3673:9780262018258 3669: 3665: 3661: 3655: 3647: 3643: 3638: 3633: 3628: 3623: 3619: 3615: 3611: 3604: 3595: 3590: 3586: 3578: 3574: 3567: 3559: 3557:9780080949123 3553: 3549: 3542: 3528: 3524: 3518: 3514: 3505: 3502: 3500: 3497: 3495: 3492: 3490: 3487: 3485: 3482: 3481: 3473:(NNGP) kernel 3472: 3469: 3467: 3464: 3462: 3459: 3456: 3453: 3451: 3448: 3446: 3443: 3441: 3440:Graph kernels 3438: 3436: 3435:Fisher kernel 3433: 3432: 3426: 3424: 3420: 3416: 3412: 3408: 3404: 3400: 3396: 3395:geostatistics 3386: 3384: 3358: 3354: 3338: 3329: 3315: 3295: 3275: 3255: 3247: 3226: 3216: 3211: 3198: 3195: 3190: 3187: 3183: 3157: 3147: 3144: 3141: 3136: 3104: 3101: 3098: 3088: 3076: 3071: 3069: 3053: 3033: 3000: 2986: 2978: 2957: 2953: 2949: 2946: 2943: 2938: 2934: 2910: 2861: 2851: 2848: 2845: 2840: 2810: 2807: 2802: 2798: 2792: 2788: 2779: 2769: 2764: 2751: 2746: 2741: 2738: 2735: 2731: 2725: 2720: 2717: 2714: 2710: 2689: 2669: 2666: 2663: 2638: 2630: 2624: 2618: 2611: 2607: 2602: 2600: 2584: 2576: 2536: 2528: 2524: 2484: 2452: 2449: 2446: 2423: 2403: 2392: 2389: 2375: 2369: 2361: 2353: 2342: 2305: 2302: 2282: 2273: 2271: 2267: 2266: 2261: 2230: 2220: 2217: 2185: 2184:inner product 2164: 2156: 2145: 2099: 2064: 2060: 2038: 2018: 2000: 1992: 1984: 1973: 1962: 1937: 1933: 1929: 1924: 1920: 1916: 1913: 1910: 1907: 1901: 1892: 1889: 1886: 1877: 1868: 1859: 1857: 1853: 1849: 1845: 1819: 1796: 1789: 1788:sign function 1785: 1765: 1760: 1756: 1747: 1729: 1726: 1723: 1720: 1717: 1711: 1706: 1702: 1679: 1674: 1671: 1668: 1655: 1651: 1647: 1642: 1617: 1595: 1590: 1582: 1542: 1532: 1529: 1522: 1507: 1485: 1457: 1454: 1451: 1448: 1445: 1439: 1430: 1420: 1419: 1418: 1404: 1396: 1388: 1383: 1370: 1365: 1361: 1355: 1351: 1345: 1340: 1337: 1334: 1330: 1326: 1323: 1320: 1311: 1300: 1282: 1253: 1240: 1224: 1217: 1199: 1195: 1169: 1165: 1161: 1156: 1126: 1118: 1108: 1106: 1102: 1098: 1097:eigenproblems 1094: 1089: 1087: 1083: 1079: 1075: 
1071: 1067: 1063: 1058: 1056: 1052: 1048: 1044: 1040: 1039:feature space 1037: 1033: 1028: 1026: 1022: 1018: 1014: 1010: 1006: 1002: 998: 994: 990: 986: 982: 978: 974: 970: 966: 954: 949: 947: 942: 940: 935: 934: 932: 931: 924: 921: 917: 914: 913: 912: 909: 907: 904: 903: 897: 896: 889: 886: 884: 881: 879: 876: 874: 871: 869: 866: 864: 861: 859: 856: 855: 849: 848: 841: 838: 836: 833: 831: 828: 826: 823: 821: 818: 816: 813: 811: 808: 806: 803: 802: 796: 795: 788: 785: 783: 780: 778: 775: 773: 770: 769: 763: 762: 755: 752: 750: 747: 745: 744:Crowdsourcing 742: 740: 737: 736: 730: 729: 720: 717: 716: 715: 712: 710: 707: 705: 702: 700: 697: 696: 693: 688: 687: 679: 676: 674: 673:Memtransistor 671: 669: 666: 664: 661: 657: 654: 653: 652: 649: 647: 644: 640: 637: 635: 632: 630: 627: 625: 622: 621: 620: 617: 615: 612: 610: 607: 605: 602: 598: 595: 594: 593: 590: 586: 583: 581: 578: 576: 573: 571: 568: 567: 566: 563: 561: 558: 556: 555:Deep learning 553: 551: 548: 547: 544: 539: 538: 531: 528: 526: 523: 521: 519: 515: 513: 510: 509: 506: 501: 500: 491: 490:Hidden Markov 488: 486: 483: 481: 478: 477: 476: 473: 472: 469: 464: 463: 456: 453: 451: 448: 446: 443: 441: 438: 436: 433: 431: 428: 426: 423: 421: 418: 416: 413: 412: 409: 404: 403: 396: 393: 391: 388: 386: 382: 380: 377: 375: 372: 370: 368: 364: 362: 359: 357: 354: 352: 349: 348: 345: 340: 339: 332: 329: 327: 324: 322: 319: 317: 314: 312: 309: 307: 304: 302: 299: 297: 295: 291: 287: 286:Random forest 284: 282: 279: 277: 274: 273: 272: 269: 267: 264: 262: 259: 258: 251: 250: 245: 244: 236: 230: 229: 222: 219: 217: 214: 212: 209: 207: 204: 202: 199: 197: 194: 192: 189: 187: 184: 182: 179: 177: 174: 172: 171:Data cleaning 169: 167: 164: 162: 159: 157: 154: 152: 149: 147: 144: 142: 139: 137: 134: 133: 127: 126: 119: 116: 114: 111: 109: 106: 104: 101: 99: 96: 94: 91: 89: 86: 84: 83:Meta-learning 81: 79: 76: 74: 71: 69: 66: 64: 61: 59: 56: 55: 49: 48: 45: 40: 37: 36: 32: 31: 19: 3838: 3814: 3794: 3745: 3739: 3733: 3714: 
3708: 3697:. Retrieved 3693:the original 3682: 3663: 3654: 3627:math/0701907 3617: 3613: 3603: 3584: 3576: 3572: 3566: 3547: 3541: 3530:. Retrieved 3526: 3517: 3392: 3389:Applications 3330: 3072: 3001: 2603: 2274: 2263: 2259: 2056: 1841: 1238: 1114: 1090: 1059: 1051:kernel trick 1050: 1045:between the 1035: 1029: 1012: 1008: 997:correlations 968: 962: 830:PAC learning 517: 366: 361:Hierarchical 293: 247: 241: 18:Kernel trick 3355:as used in 3075:Gram matrix 2065:. For all 1237:, called a 1009:feature map 714:Multi-agent 651:Transformer 550:Autoencoder 306:Naive Bayes 44:data mining 3879:Categories 3699:2014-05-30 3579:: 821–837. 3532:2023-04-04 3510:References 3351:is also a 3244:, must be 2597:satisfies 1015:, i.e., a 699:Q-learning 597:Restricted 395:Mean shift 344:Clustering 321:Perceptron 249:regression 151:Clustering 146:Regression 3818:. Wiley. 3589:CiteSeerX 3581:Cited in 3145:… 3102:× 3089:∈ 3054:φ 3034:φ 2947:… 2849:… 2808:≥ 2732:∑ 2711:∑ 2667:⊂ 2619:μ 2537:φ 2485:φ 2457:⟩ 2453:⋅ 2447:⋅ 2444:⟨ 2413:⟩ 2393:φ 2376:φ 2373:⟨ 2316:→ 2306:: 2303:φ 2241:→ 2231:× 2221:: 1993:⋅ 1955:and thus 1878:φ 1823:^ 1766:∈ 1718:− 1712:∈ 1596:∈ 1553:→ 1543:× 1533:: 1446:− 1440:∈ 1434:^ 1331:∑ 1327:⁡ 1315:^ 858:ECML PKDD 840:VC theory 787:ROC curve 719:Self-play 639:DeepDream 480:Bayes net 271:Ensembles 52:Paradigms 3793:(2004). 3770:73657847 3646:88516979 3478:See also 2656:for all 2404:′ 2362:′ 2270:integral 2165:′ 2100:′ 2034:‖ 2026:‖ 2014:‖ 2006:‖ 1591:′ 1486:′ 1397:′ 1254:′ 1036:implicit 989:rankings 985:clusters 281:Boosting 130:Problems 3750:Bibcode 3399:kriging 2575:measure 1072:(PCA), 863:NeurIPS 680:(ECRAM) 634:AlexNet 276:Bagging 3846:  3822:  3801:  3768:  3721:  3670:  3644:  3591:  3554:  3527:Engati 2521:is an 2260:kernel 1417:where 1239:kernel 1055:graphs 1047:images 1013:kernel 656:Vision 512:RANSAC 390:OPTICS 385:DBSCAN 369:-means 176:AutoML 3766:S2CID 3642:S2CID 3622:arXiv 3620:(3). 3457:(RBF) 2975:(cf. 
Applications

Application areas of kernel methods are diverse and include geostatistics, kriging, inverse distance weighting, 3D reconstruction, bioinformatics, cheminformatics, information extraction and handwriting recognition.

References

Aizerman, M. A.; Braverman, Emmanuel M.; Rozonoer, L. I. (1964). "Theoretical foundations of the potential function method in pattern recognition learning". Automation and Remote Control. 25: 821–837.
Hofmann, Thomas; Schölkopf, Bernhard; Smola, Alexander J. (2008). "Kernel methods in machine learning". The Annals of Statistics. 36 (3). arXiv:math/0701907.

