
Univ.-Prof. Dr. rer. nat.

Ingo Steinwart

Professor, Head of Institute
Institute for Stochastics and Applications
Chair of Stochastics

Contact

+49 711 685-65388

Website

Pfaffenwaldring 57
70569 Stuttgart
Germany
Room: 8.544

Office Hours

By appointment via email

Research Areas

  • Statistical learning theory
  • Kernel-based learning methods
  • Cluster analysis
  • Efficient learning methods for large data sets
  • Loss functions
  • Learning with non-i.i.d. data
  • Applications of learning methods
  • Reproducing kernel Hilbert spaces
Publications from PUMA:
  1. 2017

    1. P. Thomann, I. Steinwart, I. Blaschzyk, and M. Meister, “Spatial Decompositions for Large Scale SVMs,” in Proceedings of Machine Learning Research Volume 54: Proceedings of the 20th International Conference on Artificial Intelligence and Statistics 2017, 2017, pp. 1329--1337.
    2. I. Steinwart, “A short note on the comparison of interpolation widths, entropy numbers, and Kolmogorov widths,” JOURNAL OF APPROXIMATION THEORY, vol. 215, pp. 13–27, 2017.
    3. I. Steinwart, “Representation of quasi-monotone functionals by families of separating hyperplanes,” MATHEMATISCHE NACHRICHTEN, vol. 290, no. 11–12, pp. 1859–1883, 2017.
    4. I. Steinwart and P. Thomann, “liquidSVM: A Fast and Versatile SVM Package,” Fakultät für Mathematik und Physik, Universität Stuttgart, 2017.
    5. I. Steinwart, B. K. Sriperumbudur, and P. Thomann, “Adaptive Clustering Using Kernel Density Estimators,” Fakultät für Mathematik und Physik, Universität Stuttgart, 2017.
     6. I. Steinwart, “A Short Note on the Comparison of Interpolation Widths, Entropy Numbers, and Kolmogorov Widths,” J. Approx. Theory, vol. 215, pp. 13--27, 2017.
     7. I. Steinwart, “Convergence Types and Rates in Generic Karhunen-Loève Expansions with Applications to Sample Path Properties,” ArXiv e-prints, 2017.
    8. H. Hang and I. Steinwart, “A Bernstein-type Inequality for Some Mixing Processes and Dynamical Systems with an Application to Learning,” Ann. Statist., vol. 45, pp. 708--743, 2017.
    9. S. Fischer and I. Steinwart, “Sobolev Norm Learning Rates for Regularized Least-Squares Algorithm,” Fakultät für Mathematik und Physik, Universität Stuttgart, 2017.
    10. M. Farooq and I. Steinwart, “Learning Rates for Kernel-Based Expectile Regression,” Fakultät für Mathematik und Physik, Universität Stuttgart, 2017.
    11. M. Farooq and I. Steinwart, “An SVM-like Approach for Expectile Regression,” Comput. Statist. Data Anal., vol. 109, pp. 159--181, 2017.
  2. January 2016

     1. I. Steinwart, “Simons’ SVM: A fast SVM toolbox,” Jan. 2016.
  3. 2016

    1. P. Thomann, I. Steinwart, I. Blaschzyk, and M. Meister, “Spatial Decompositions for Large Scale SVMs.,” CoRR, vol. abs/1612.00374, 2016.
    2. I. Steinwart, P. Thomann, and N. Schmid, “Learning with Hierarchical Gaussian Kernels,” Fakultät für Mathematik und Physik, Universität Stuttgart, 2016.
    3. M. Meister and I. Steinwart, “Optimal Learning Rates for Localized SVMs,” J. Mach. Learn. Res., vol. 17, pp. 1–44, 2016.
    4. H. Hang, I. Steinwart, Y. Feng, and J. A. K. Suykens, “Kernel Density Estimation for Dynamical Systems,” Fakultät für Mathematik und Physik, Universität Stuttgart, 2016.
    5. H. Hang, Y. Feng, I. Steinwart, and J. A. K. Suykens, “Learning theory estimates with observations from general stationary stochastic processes,” Neural Computation, vol. 28, pp. 2853--2889, 2016.
    6. I. Blaschzyk and I. Steinwart, “Improved Classification Rates under Refined Margin Conditions,” Fakultät für Mathematik und Physik, Universität Stuttgart, 2016.
  4. 2015

    1. P. Thomann, I. Steinwart, and N. Schmid, “Towards an Axiomatic Approach to Hierarchical Clustering of Measures,” J. Mach. Learn. Res., vol. 16, pp. 1949--2002, 2015.
    2. I. Steinwart, “Fully Adaptive Density-Based Clustering,” Ann. Statist., vol. 43, pp. 2132--2167, 2015.
    3. I. Steinwart, “Measuring the capacity of sets of functions in the analysis of ERM,” in Festschrift in Honor of Alexey Chervonenkis, A. Gammerman and V. Vovk, Eds. Berlin: Springer, 2015, pp. 223--239.
  5. 2014

    1. I. Steinwart, “Convergence Types and Rates  in Generic Karhunen-Loève Expansions with Applications to Sample Path Properties,” Fakultät für Mathematik und Physik, Universität Stuttgart, 2014–007, 2014.
    2. I. Steinwart, C. Pasin, R. Williamson, and S. Zhang, “Elicitation and Identification of Properties,” in JMLR Workshop and Conference Proceedings Volume 35: Proceedings of the 27th Conference on Learning Theory 2014, 2014, pp. 482--526.
    3. H. Hang and I. Steinwart, “Fast learning from α-mixing observations.,” J. Multivariate Analysis, vol. 127, pp. 184–199, 2014.
    4. H. Hang and I. Steinwart, “Fast Learning from $\alpha$-mixing Observations,” J. Multivariate Anal., vol. 127, pp. 184--199, 2014.
  6. 2013

     1. I. Steinwart, “Supplement A to ‘Fully Adaptive Density-Based Clustering’,” Fakultät für Mathematik und Physik, Universität Stuttgart, 2013–016, 2013.
     2. I. Steinwart, “Supplement B to ‘Fully Adaptive Density-Based Clustering’,” Fakultät für Mathematik und Physik, Universität Stuttgart, 2013.
    3. I. Steinwart, “Some Remarks on the Statistical Analysis of SVMs and Related Methods,” in Empirical Inference -- Festschrift in Honor of Vladimir N. Vapnik, B. Schölkopf, Z. Luo, and V. Vovk, Eds. Berlin: Springer, 2013, pp. 25–36.
    4. M. Eberts and I. Steinwart, “Optimal regression rates for SVMs using Gaussian kernels,” Electron. J. Stat., vol. 7, pp. 1--42, 2013.
  7. 2012

    1. I. Steinwart and C. Scovel, “Mercer’s Theorem on General Domains: on the Interaction between Measures, Kernels, and RKHSs,” Constr. Approx., vol. 35, pp. 363--417, 2012.
    2. B. K. Sriperumbudur and I. Steinwart, “Consistency and Rates for Clustering with DBSCAN,” in JMLR Workshop and Conference Proceedings Volume 22: Proceedings of the 15th International Conference on Artificial Intelligence and Statistics 2012, 2012, pp. 1090–1098.
    3. L. Bornn, M. Anghel, and I. Steinwart, “Forecasting with Historical Data or Process Knowledge under Misspecification: A Comparison,” Fakultät für Mathematik und Physik, Universität Stuttgart, 2012.
  8. 2011

    1. I. Steinwart and A. Christmann, “Estimating Conditional Quantiles with the Help of the Pinball Loss,” Bernoulli, vol. 17, pp. 211--225, 2011.
    2. I. Steinwart, D. Hush, and C. Scovel, “Training SVMs without offset,” J. Mach. Learn. Res., vol. 12, pp. 141--202, 2011.
    3. I. Steinwart, “Adaptive Density Level Set Clustering,” in JMLR Workshop and Conference Proceedings Volume 19: Proceedings of the 24th Conference on Learning Theory 2011, 2011, pp. 703--738.
    4. M. Eberts and I. Steinwart, “Optimal learning rates for least squares SVMs using  Gaussian kernels,” in Advances in Neural Information Processing Systems 24, 2011, pp. 1539--1547.
  9. 2010

    1. I. Steinwart, J. Theiler, and D. Llamocca, “Using support vector machines for anomalous change detection,” in IEEE Geoscience and Remote Sensing Society and the IGARSS 2010, 2010, pp. 3732--3735.
    2. C. Scovel, D. Hush, I. Steinwart, and J. Theiler, “Radial kernels and their reproducing kernel Hilbert spaces,” J. Complexity, vol. 26, pp. 641–660, 2010.
    3. A. Christmann and I. Steinwart, “Universal Kernels on Non-Standard Input Spaces,” in Advances in Neural Information Processing Systems 23, 2010, pp. 406--414.
  10. 2009

    1. I. Steinwart, “Oracle inequalities for support vector machines that are based on random entropy numbers,” J. Complexity, vol. 25, no. 5, pp. 437–454, 2009.
    2. I. Steinwart, D. Hush, and C. Scovel, “Learning from dependent observations,” J. Multivariate Anal., vol. 100, pp. 175--194, 2009.
    3. I. Steinwart and A. Christmann, “Fast Learning from Non-i.i.d. Observations,” in Advances in Neural Information Processing Systems 22, 2009, pp. 1768--1776.
    4. I. Steinwart and A. Christmann, “Sparsity of SVMs that use the $\epsilon$-insensitive loss,” in Advances in Neural Information Processing Systems 21, 2009, pp. 1569--1576.
    5. I. Steinwart, “Two oracle inequalities for regularized boosting classifiers,” Stat. Interface, vol. 2, pp. 271--284, 2009.
    6. I. Steinwart, “Oracle inequalities for SVMs that are Based on Random Entropy Numbers,” J. Complexity, vol. 25, pp. 437--454, 2009.
    7. I. Steinwart and M. Anghel, “An SVM approach for forecasting the evolution of an unknown ergodic dynamical system from observations with unknown noise,” Ann. Statist., vol. 37, pp. 841--875, 2009.
    8. I. Steinwart, D. Hush, and C. Scovel, “Optimal Rates for Regularized Least Squares Regression,” in Proceedings of the 22nd Annual Conference on Learning Theory, 2009, pp. 79--93.
    9. A. Christmann, A. van Messem, and I. Steinwart, “On consistency and robustness properties of support vector machines for heavy-tailed distributions,” Stat. Interface, vol. 2, pp. 311--327, 2009.
  11. 2008

    1. I. Steinwart and A. Christmann, “Sparsity of SVMs that use the epsilon-insensitive loss.,” in NIPS, 2008, pp. 1569–1576.
    2. I. Steinwart and A. Christmann, “How SVMs can estimate quantiles and the median,” in Advances in Neural Information Processing Systems 20, Cambridge, MA, 2008, pp. 305--312.
    3. I. Steinwart and A. Christmann, Support Vector Machines. New York: Springer, 2008.
    4. A. Christmann and I. Steinwart, “Consistency of kernel based quantile regression,” Appl. Stoch. Models Bus. Ind., vol. 24, pp. 171--183, 2008.
  12. 2007

    1. I. Steinwart, D. Hush, and C. Scovel, “An Oracle Inequality for Clipped Regularized Risk Minimizers,” in Advances in Neural Information Processing Systems 19, Cambridge, MA, 2007, pp. 1321--1328.
    2. I. Steinwart and C. Scovel, “Fast rates for support vector machines using Gaussian kernels,” Ann. Statist., vol. 35, pp. 575--607, 2007.
    3. I. Steinwart, “How to compare different loss functions,” Constr. Approx., vol. 26, pp. 225--287, 2007.
    4. C. Scovel, D. Hush, and I. Steinwart, “Approximate duality,” J. Optim. Theory Appl., vol. 135, pp. 429--443, 2007.
    5. N. List, D. Hush, C. Scovel, and I. Steinwart, “Gaps in support vector optimization,” in Proceedings of the 20th Conference on Learning Theory, New York, 2007, pp. 336--348.
    6. D. Hush, C. Scovel, and I. Steinwart, “Stability of unstable learning algorithms,” Mach. Learn., vol. 67, pp. 197--206, 2007.
    7. A. Christmann and I. Steinwart, “How SVMs can estimate quantiles and the median.,” in NIPS, 2007, pp. 305–312.
    8. A. Christmann, I. Steinwart, and M. Hubert, “Robust learning from bites for data mining,” Comput. Statist. Data Anal., vol. 52, pp. 347--361, 2007.
    9. A. Christmann and I. Steinwart, “Consistency and robustness of kernel-based regression in convex risk minimization,” Bernoulli, vol. 13, pp. 799--819, 2007.
  13. 2006

    1. I. Steinwart, D. R. Hush, and C. Scovel, “An Oracle Inequality for Clipped Regularized Risk Minimizers.,” in NIPS, 2006, pp. 1321–1328.
    2. I. Steinwart, D. Hush, and C. Scovel, “A new concentration result for regularized risk minimizers,” in High Dimensional Probability IV, Beachwood, OH, 2006, pp. 260--275.
    3. I. Steinwart, D. Hush, and C. Scovel, “Function classes that approximate the Bayes risk,” in Proceedings of the 19th Annual Conference on Learning Theory, New York, 2006, pp. 79--93.
    4. I. Steinwart, D. Hush, and C. Scovel, “An explicit Description of the reproducing kernel Hilbert spaces of Gaussian RBF kernels,” IEEE Trans. Inform. Theory, vol. 52, pp. 4635--4643, 2006.
    5. D. Hush, P. Kelly, C. Scovel, and I. Steinwart, “QP Algorithms with Guaranteed Accuracy and Run Time for Support Vector Machines,” J. Mach. Learn. Res., vol. 7, pp. 733--769, 2006.
  14. 2005

    1. I. Steinwart, “Consistency of support vector machines and other regularized kernel classifiers.,” IEEE Trans. Information Theory, vol. 51, no. 1, pp. 128–142, 2005.
    2. I. Steinwart, D. Hush, and C. Scovel, “Density Level Detection is Classification,” in Advances in Neural Information Processing Systems 17, Cambridge, MA, 2005, pp. 1337--1344.
    3. I. Steinwart, D. Hush, and C. Scovel, “A classification framework for anomaly detection,” J. Mach. Learn. Res., vol. 6, pp. 211--232, 2005.
    4. I. Steinwart and C. Scovel, “Fast Rates for Support Vector Machines,” in Proceedings of the 18th Annual Conference on Learning Theory, New York, 2005, pp. 279--294.
    5. I. Steinwart, “Consistency of support vector machines and other regularized kernel machines,” IEEE Trans. Inform. Theory, vol. 51, pp. 128--142, 2005.
    6. C. Scovel, D. Hush, and I. Steinwart, “Learning Rates for Density Level Detection,” Anal. Appl., vol. 3, pp. 356--371, 2005.
    7. D. M. Patterson and L. Steinwart, “Classroom technology: assisting faculty in finding weapons of mass instruction.,” in SIGUCCS, 2005, pp. 310–311.
    8. D. Hush, P. Kelly, C. Scovel, and I. Steinwart, “Provably fast algorithms for anomaly detection,” in International Workshop on Data Mining Methods for Anomaly Detection at KDD 2005, 2005, pp. 27--31.
  15. 2004

    1. I. Steinwart, D. R. Hush, and C. Scovel, “Density Level Detection is Classification.,” in NIPS, 2004, pp. 1337–1344.
    2. I. Steinwart and C. Scovel, “Fast Rates to Bayes for Kernel Machines.,” in NIPS, 2004, pp. 1345–1352.
    3. I. Steinwart, “Entropy of convex hulls---some Lorentz norm results,” J. Approx. Theory, vol. 128, pp. 42--52, 2004.
    4. I. Steinwart and C. Scovel, “When do support vector machines learn fast?,” in 16th International Symposium on Mathematical Theory of Networks and Systems, 2004.
    5. I. Steinwart, “Sparseness of support vector machines---some asymptotically sharp bounds,” in Advances in Neural Information Processing Systems 16, Cambridge, MA, 2004, pp. 1069--1076.
    6. A. Christmann and I. Steinwart, “On robustness properties of convex risk minimization methods for pattern recognition,” J. Mach. Learn. Res., vol. 5, pp. 1007--1034, 2004.
  16. 2003

     1. I. Steinwart, “On the Optimal Parameter Choice for ν-Support Vector Machines,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, no. 10, pp. 1274–1284, 2003.
    2. I. Steinwart, “Sparseness of Support Vector Machines---Some Asymptotically Sharp Bounds.,” in NIPS, 2003, pp. 1069–1076.
    3. I. Steinwart, “On the optimal parameter choice for $\nu$-support vector machines,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, pp. 1274--1284, 2003.
    4. I. Steinwart, “Entropy numbers of convex hulls and an application to learning algorithms,” Arch. Math., vol. 80, pp. 310--318, 2003.
    5. I. Steinwart, “Sparseness of support vector machines,” J. Mach. Learn. Res., vol. 4, pp. 1071--1105, 2003.
    6. K. Mittmann and I. Steinwart, “On the existence of continuous modifications of vector-valued random fields,” Georgian Math. J., vol. 10, pp. 311--317, 2003.
  17. 2002

    1. I. Steinwart, “Support vector machines are universally consistent,” J. Complexity, vol. 18, pp. 768--791, 2002.
     2. I. Steinwart, “Which data-dependent bounds are suitable for SVMs?,” 2002.
    3. J. Creutzig and I. Steinwart, “Metric entropy of convex hulls in type $p$ spaces---the critical case,” Proc. Amer. Math. Soc., vol. 130, pp. 733--743, 2002.
  18. 2001

    1. I. Steinwart, “On the influence of the kernel on the consistency of support vector machines,” J. Mach. Learn. Res., vol. 2, pp. 67--93, 2001.
  19. 2000

    1. J. M. Yohe and M. C. Steinwart, “Faculty Response to Classroom Use of E-Technology.,” in SIGUCCS, 2000, pp. 326–329.
    2. I. Steinwart, “Entropy of $C(K)$-valued operators,” J. Approx. Theory, vol. 103, pp. 302--328, 2000.
    3. I. Steinwart, “Entropy of $C(K)$-valued operators and some applications,” PhD dissertation, Friedrich-Schiller Universität Jena, Fakultät für Mathematik und Informatik, 2000.
  20. 1997

    1. M. C. Steinwart, “Nightmare on Elm Street to It’s a Wonderful Life: a port-per-pillow networking experience at Valparaiso University.,” in SIGUCCS, 1997, pp. 291–294.
  21. 1996

    1. J. E. Hicks and M. C. Steinwart, “The Web as a campus wide information system: tunnel of love or house of mirrors.,” in SIGUCCS, 1996, pp. 67–68.

Education

02/2000 Doctorate (Dr. rer. nat.) in Mathematics, Friedrich-Schiller-University, Jena
03/1997 Diploma in Mathematics, Carl-von-Ossietzky University, Oldenburg

Appointments

07/2017 — Faculty Member, International Max Planck Research School for Intelligent Systems, Stuttgart/Tübingen
04/2010 — Full Professor, Institute for Stochastics and Applications, Department of Mathematics, University of Stuttgart
01/2010 — 06/2011 Associate Adjunct Professor, Jack Baskin School of Engineering, Department of Computer Science, University of California, Santa Cruz
07/2008 — 04/2010 Scientist Level 4, CCS-3, Los Alamos National Laboratory
03/2003 — 04/2010 Technical Staff Member, CCS-3, Los Alamos National Laboratory
03/2002 — 09/2002 Visiting Scientist, Johannes-Gutenberg University, Mainz
03/2000 — 03/2003 Scientific Staff Member, Friedrich-Schiller-University, Jena
04/1997 — 02/2000 Stipendiary, DFG graduate college “Analytic and Stochastic Structures and Systems”, Friedrich-Schiller-University, Jena

Administrative Services

10/2010 — Member of the Senate Committee for Organisation, University of Stuttgart
04/2011 — 10/2012 Vice Dean for Mathematics, Faculty of Mathematics and Physics, University of Stuttgart

Editorial Services

01/2013 — Associate Editor, Journal of Complexity
12/2008 — Action Editor (Associate Editor), Journal of Machine Learning Research
01/2010 — 12/2012 Associate Editor, Annals of Statistics

Program Responsibilities at Conferences

Chair COLT 2013
Program Committee NIPS 2008, 2011
Program Committee COLT 2006, 2008, 2009, 2011, 2012, 2015

2018

M. Farooq and I. Steinwart, Learning rates for kernel-based expectile regression, Mach. Learn., 2018.

H. Hang, I. Steinwart, Y. Feng, and J. Suykens, Kernel density estimation for dynamical systems, J. Mach. Learn. Res., vol. 19, pp. 1-49, 2018.

I. Steinwart, Convergence types and rates in generic Karhunen-Loève expansions with applications to sample path properties, Potential Anal., 2018.

I. Blaschzyk and I. Steinwart, Improved classification rates under refined margin conditions, Electron. J. Stat., vol. 12, pp. 793-823, 2018.

 

liquidCluster: A Fast and Automated Density-Based Clustering Package

liquidCluster estimates the cluster tree of a data set with the help of density estimators; a conceptual sketch of the underlying idea follows the list below. Its key features are:
  • automated hyper-parameter selection procedure
  • speed
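
As a rough illustration of the underlying idea (this is not liquidCluster's actual algorithm, and all names and parameter values below are made up for the example): estimate a density from the sample, keep the points whose estimated density exceeds a level, and split them into connected components of a neighborhood graph. A minimal Python sketch:

# Conceptual sketch of density-level-set clustering (not liquidCluster's algorithm).
# The bandwidth, level, and radius values are illustrative only.
import numpy as np
from scipy.spatial.distance import cdist
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def kde(sample, query, bandwidth):
    # Gaussian kernel density estimate (normalization constant omitted).
    d2 = cdist(query, sample, "sqeuclidean")
    return np.exp(-d2 / (2 * bandwidth**2)).mean(axis=1)

def level_set_clusters(sample, bandwidth=0.3, level=0.05, radius=0.5):
    density = kde(sample, sample, bandwidth)
    keep = np.where(density >= level)[0]           # points inside the density level set
    pts = sample[keep]
    adj = csr_matrix(cdist(pts, pts) <= radius)    # neighborhood graph on the level set
    _, labels = connected_components(adj, directed=False)
    return keep, labels                            # cluster label for each kept point

# Two well-separated blobs; low-density points are simply left unclustered.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, (200, 2)), rng.normal(2, 0.5, (200, 2))])
kept, labels = level_set_clusters(X)
print(len(np.unique(labels)), "clusters found at this density level")

Varying the level from high to low and tracking how these components split and merge yields the cluster tree mentioned above.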
The interface of the currently available command line version is very similar to that of liquidSVM.

On Linux and OS X, liquidCluster can be used from the terminal as follows:

wget www.isa.uni-stuttgart.de/software/liquidCluster.tar.gz
tar xzf liquidCluster.tar.gz
cd liquidCluster
make cluster
cd scripts
./cluster.sh bananas-5-2d
Results are written to the ./results folder. 

The terminal version for Linux/OS X can be downloaded here: liquidCluster.tar.gz

 

liquidSVM: A Fast and Versatile SVM Package

News

February 23rd 2017: version 1.2

  • The package has been renamed to liquidSVM (formerly simons-svm).
  • There is now a paper with many benchmarks on the arXiv.
  • The R-package has reached version 1.0.0.

June 8th 2016: version 1.1.

Changes include:

  • Large datasets (tested with up to 10 million samples) can be trained and tested.
  • A new recursive algorithm expedites the partitioning of data.
  • Many bugs were fixed.

General Information

Support vector machines (SVMs) and related kernel-based learning algorithms are a well-known class of machine learning methods for non-parametric classification and regression. liquidSVM is an implementation of SVMs whose key features are:

  • fully integrated hyper-parameter selection (a conceptual sketch of this idea follows the list below),
  • extreme speed on both small and large data sets,
  • bindings for R, Python, MATLAB / Octave, Java, and Spark,
  • inclusion of a variety of different learning scenarios:
    • least-squares, quantile, and expectile regression
    • multi-class classification, ROC, and Neyman-Pearson learning
  • full flexibility for experts.
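
To make the first item more concrete: integrated hyper-parameter selection means the package itself picks quantities such as kernel widths and regularization parameters, typically by validation over a grid. The following minimal sketch (not liquidSVM's internal code; all names and grid values are illustrative) shows the idea for least-squares regression with a Gaussian kernel:

# Conceptual sketch of grid-based hyper-parameter selection for a least-squares
# kernel method; this is not liquidSVM's implementation.
import numpy as np

def gauss_kernel(A, B, gamma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_predict(Xtr, ytr, Xte, gamma, lam):
    K = gauss_kernel(Xtr, Xtr, gamma)
    alpha = np.linalg.solve(K + lam * len(Xtr) * np.eye(len(Xtr)), ytr)
    return gauss_kernel(Xte, Xtr, gamma) @ alpha

def select_and_train(X, y, gammas, lams, folds=5):
    # Cross-validate every (gamma, lambda) pair and keep the best one.
    idx = np.arange(len(X))
    np.random.default_rng(0).shuffle(idx)
    parts = np.array_split(idx, folds)
    best, best_err = None, np.inf
    for gamma in gammas:
        for lam in lams:
            err = 0.0
            for k in range(folds):
                val = parts[k]
                tr = np.concatenate([parts[j] for j in range(folds) if j != k])
                pred = fit_predict(X[tr], y[tr], X[val], gamma, lam)
                err += np.mean((pred - y[val]) ** 2)
            if err < best_err:
                best, best_err = (gamma, lam), err
    gamma, lam = best
    return lambda Xnew: fit_predict(X, y, Xnew, gamma, lam), best

# Toy 1d regression problem.
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, (200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
predict, (gamma, lam) = select_and_train(X, y, gammas=[0.1, 1.0, 10.0], lams=[1e-4, 1e-2, 1.0])
print("selected (gamma, lambda):", gamma, lam)

liquidSVM performs this kind of selection automatically, so users normally do not have to set these parameters by hand.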

For questions and comments, just contact us via mail. You can also ask to be registered on our mailing list.

liquidSVM is licensed under AGPL 3.0. In case you need another license, please contact me.

Command Line interface

Installation instructions for the command line versions.

  • Terminal version for Linux/OS X: liquidSVM.tar.gz
  • Terminal version for Windows (64 bit): avx2: liquidSVM.zip, avx: liquidSVM.zip, sse2: liquidSVM.zip
  • Previous versions: v1.1 (June 2016), v1.0 (January 2016)

On Linux and OS X, liquidSVM can be used from the terminal as follows:

wget www.isa.uni-stuttgart.de/software/liquidSVM.tar.gz
tar xzf liquidSVM.tar.gz
cd liquidSVM
make all
scripts/mc-svm.sh banana-mc 1 2

R

Read the demo vignette for a tutorial on installing the liquidSVM package and using it, and the documentation vignette for more advanced installation options and usage.

A simple usage example:

install.packages("liquidSVM")
library(liquidSVM)
banana <- liquidData('banana-mc')
model <- mcSVM( Y~. , banana$train, display=1, threads=2)
result <- test(model, banana$test)
errors(result)

Python

Read the demo notebook for a tutorial on installing the liquidSVM package and using it, and the homepage for more advanced installation options and usage.

To install, use:

pip install --user liquidSVM

and then use it in Python, for example:

from liquidSVM import *
banana = LiquidData('banana-mc')
model = mcSVM(banana.train, display=1, threads=2)
result, err = model.test(banana.test)
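
The example above shows multi-class classification. Assuming the Python bindings mirror the R interface (an assumption: the function lsSVM and the bundled reg-1d sample data are taken from the R package and may be named differently in Python), least-squares regression might look like:

# Hedged sketch: least-squares regression, assuming the Python bindings expose
# lsSVM and the 'reg-1d' sample data in the same way as the R package.
from liquidSVM import *
reg = LiquidData('reg-1d')
model = lsSVM(reg.train, display=1, threads=2)
result, err = model.test(reg.test)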

MATLAB / Octave

The MATLAB bindings are currently being reworked to provide a better interface; this is a preview version.

It currently does not work on Windows.

For installation, download the toolbox liquidSVM.mltbx and install it in MATLAB by double-clicking it. To compile and add paths, issue:

makeliquidSVM native

Then you can use it like:

banana = liquidData('banana-mc');
model = svm_mc(banana.train, 'DISPLAY', 1, 'THREADS', 2);
[result, err] = model.test(banana.test);

Most of the code also works in Octave if you use liquidSVM-octave.zip.

Java

The main homepage is here. For installation download liquidSVM-java.zip and unzip it. The classes are all in package de.uni_stuttgart.isa.liquidsvm and an easy example is:

LiquidData banana = new LiquidData("banana-mc");
SVM model = new MC(banana.train, new Config().display(1).threads(2));
ResultAndErrors result = model.test(banana.test);

If this is saved in the file Example.java, it can be compiled and run using:

# if you want to compile the JNI-native library:
make lib
# compile your Java-Code
javac -classpath liquidSVM.jar Example.java
# and run it
java -Djava.library.path=. -cp .:liquidSVM.jar Example

Spark

This is a preview version; see the Spark page for more details. Download liquidSVM-spark.zip and unzip it. Assuming Spark is installed in $SPARK_HOME, you can issue:

make lib
export LD_LIBRARY_PATH=.:$LD_LIBRARY_PATH
$SPARK_HOME/bin/spark-submit \
  --class de.uni_stuttgart.isa.liquidsvm.spark.App \
  liquidSVM-spark.jar banana-mc

If you have configured Spark to run on a cluster with Hadoop, use:

hdfs dfs -put data/covtype-full.train.csv data/covtype-full.test.csv .
make lib
$SPARK_HOME/bin/spark-submit --files ../libliquidsvm.so \
  --conf spark.executor.extraLibraryPath=. \
  --conf spark.driver.extraLibraryPath=. \
  --class de.uni_stuttgart.isa.liquidsvm.spark.App \
  --num-executors 14 liquidSVM-spark.jar covtype-full

Extra Datasets for the Demo

covertype data set with 35,090 training and 34,910 test samples

covertype data set with 522,909 training and 58,103 test samples

Both datasets were compiled from LIBSVM's version of the covertype dataset, which in turn was taken from the UCI repository and preprocessed as in [RC02a]. Copyright for this dataset is by Jock A. Blackard and Colorado State University.

Citation

If you use liquidSVM, please cite it as:

I. Steinwart and P. Thomann. liquidSVM: A fast and versatile SVM package. ArXiv e-prints 1702.06899, February 2017.
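
For convenience, a BibTeX entry along these lines could be used (the entry key and field layout are a suggestion; the bibliographic data is taken from the citation above):

@article{steinwart2017liquidsvm,
  author  = {I. Steinwart and P. Thomann},
  title   = {{liquidSVM}: A Fast and Versatile {SVM} Package},
  journal = {ArXiv e-prints},
  eprint  = {1702.06899},
  year    = {2017},
  month   = {February}
}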