►Cshark::AbstractBudgetMaintenanceStrategy< InputType > | This is the abstract interface for any budget maintenance strategy |
Cshark::MergeBudgetMaintenanceStrategy< InputType > | Budget maintenance strategy that merges two vectors |
Cshark::ProjectBudgetMaintenanceStrategy< InputType > | Budget maintenance strategy that projects a vector |
Cshark::RemoveBudgetMaintenanceStrategy< InputType > | Budget maintenance strategy that removes a vector |
►Cshark::AbstractBudgetMaintenanceStrategy< RealVector > | |
Cshark::MergeBudgetMaintenanceStrategy< RealVector > | Budget maintenance strategy merging vectors |
Cshark::ProjectBudgetMaintenanceStrategy< RealVector > | Budget maintenance strategy that projects a vector |
►Cshark::AbstractConstraintHandler< SearchPointType > | Implements the base class for constraint handling |
Cshark::BoxConstraintHandler< SearchPointType > | |
►Cshark::AbstractConstraintHandler< Vector > | |
Cshark::BoxConstraintHandler< Vector > | |
►Cshark::AbstractDistribution | Abstract class for distributions |
Cshark::Normal< RngType > | Implements a univariate normal (Gaussian) distribution |
Cshark::Uniform< RngType > | Implements a continuous uniform distribution |
►Cshark::AbstractNearestNeighbors< InputType, LabelType > | Interface for Nearest Neighbor queries |
Cshark::SimpleNearestNeighbors< InputType, LabelType > | Brute force optimized nearest neighbor implementation |
Cshark::TreeNearestNeighbors< InputType, LabelType > | Nearest Neighbors implementation using binary trees |
Cshark::AbstractStoppingCriterion< ResultSetT > | Base class for stopping criteria of optimization algorithms |
►Cshark::AbstractStoppingCriterion< ResultSet > | |
Cshark::MaxIterations< ResultSet > | This stopping criterion stops after a fixed number of iterations |
►Cshark::AbstractStoppingCriterion< SingleObjectiveResultSet< PointType > > | |
Cshark::TrainingError< PointType > | This stopping criterion tracks the improvement of the error function of the training error over an interval of iterations |
Cshark::TrainingProgress< PointType > | This stopping criterion tracks the improvement of the training error over an interval of iterations |
►Cshark::AbstractStoppingCriterion< SingleObjectiveResultSet< RealVector > > | |
Cshark::ValidatedStoppingCriterion | Given the current Result set of the optimizer, calculates the validation error using a validation function and hands the results over to the underlying stopping criterion |
►Cshark::AbstractStoppingCriterion< ValidatedSingleObjectiveResultSet< PointType > > | |
Cshark::GeneralizationLoss< PointType > | The generalization loss calculates the relative increase of the validation error compared to the minimum training error |
Cshark::GeneralizationQuotient< PointType > | Stopping criterion monitoring the quotient of generalization loss and training progress |
Cshark::AdditiveEpsilonIndicator | Given a reference front R and an approximation F, calculates the additive approximation quality of F |
Cshark::BarsAndStripes | Generates the Bars-And-Stripes problem. In this problem, a 4x4 image has either rows or columns of the same value |
►Cbase | |
Cshark::MultiSequenceIterator< SequenceContainer > | Iterator which iterates over the elements of a nested sequence |
Cshark::BaseFastNonDominatedSort< Extractor > | Implements the well-known non-dominated sorting algorithm |
Cshark::BaseRng< RNG > | Collection of different variate generators for different distributions |
►Cshark::statistics::BaseStatisticsObject | Base class for all Statistic Objects to be used with Statistics |
Cshark::statistics::FractionMissing | For a vector of points computes for every dimension the fraction of missing values |
Cshark::statistics::Mean | For a vector of points computes for every dimension the mean |
►Cshark::statistics::Quantile | For a vector of points computes for every dimension the p-quantile |
Cshark::statistics::LowerQuantile | For a vector of points computes for every dimension the 25%-quantile |
Cshark::statistics::Median | For a vector of points computes for every dimension the median |
Cshark::statistics::UpperQuantile | For a vector of points computes for every dimension the 75%-quantile |
Cshark::statistics::Variance | For a vector of points computes for every dimension the variance |
Cshark::BiasSolver< Matrix > | |
Cshark::BiasSolverSimplex< Matrix > | |
►Cbidirectional_iterator_base | |
Cshark::blas::diagonal_matrix< VectorType >::const_row_iterator | |
►Cshark::BinaryTree< InputT > | Super class of binary space-partitioning trees |
Cshark::KDTree< InputT > | KD-tree, a binary space-partitioning tree |
►Cshark::BinaryTree< Container::value_type > | |
Cshark::KHCTree< Container, CuttingAccuracy > | KHC-tree, a binary space-partitioning tree |
►Cshark::BinaryTree< VectorType > | |
Cshark::LCTree< VectorType, CuttingAccuracy > | LC-tree, a binary space-partitioning tree |
Cshark::BitflipMutator | Bitflip mutation operator |
Cshark::blas::Blocking< Matrix > | Partitions the matrix into 4 blocks defined by one splitting point (i,j) |
Cshark::BlockMatrix2x2< Matrix > | SVM regression matrix |
Cshark::BoundingBoxComputer< Set > | Calculates bounding boxes |
Cshark::BoxConstrainedProblem< SVMProblem > | Quadratic program with box constraints |
►Cshark::BoxConstrainedProblem< Problem > | |
Cshark::BoxConstrainedShrinkingProblem< Problem > | |
Cshark::BoxedSVMProblem< MatrixT > | Boxed problem for alpha in [lower,upper]^n and equality constraints |
Cshark::CachedMatrix< Matrix > | Efficient quadratic matrix cache |
Cshark::CanBeCalled< Functor, Argument > | Detects whether Functor(Argument) can be called |
Cshark::CMAChromosome | Models a CMAChromosome of the elitist (MO-)CMA-ES that encodes strategy parameters |
Cshark::blas::const_expression< compressed_matrix< T, I > > | |
Cshark::blas::const_expression< compressed_matrix< T, I > const > | |
Cshark::blas::const_expression< compressed_vector< T, I > > | |
Cshark::blas::const_expression< compressed_vector< T, I > const > | |
Cshark::blas::const_expression< matrix< T, Orientation > > | |
Cshark::blas::const_expression< matrix< T, Orientation > const > | |
Cshark::blas::const_expression< triangular_matrix< T, Orientation, TriangularType > > | |
Cshark::blas::const_expression< triangular_matrix< T, Orientation, TriangularType > const > | |
Cshark::blas::const_expression< vector< T > > | |
Cshark::blas::const_expression< vector< T > const > | |
Cshark::ConstProxyReference< T > | Sets the type of ProxyReference |
Cshark::CSVMProblem< MatrixT > | Problem formulation for binary C-SVM problems |
Cshark::CVFolds< DatasetTypeT > | |
Cshark::CVFolds< DatasetType > | |
Cshark::CVFolds< LabeledData< InputType, unsigned int > > | |
Cshark::DataDistribution< InputType > | A DataDistribution defines an unsupervised learning problem |
►Cshark::DataDistribution< RealVector > | |
Cshark::ImagePatches | Given a set of images, draws a set of image patches of a given size |
Cshark::NormalDistributedPoints | Generates a set of normally distributed points |
Cshark::DataView< DatasetType > | Constant time Element-Lookup for Datasets |
Cshark::DataView< const shark::LabeledData > | |
Cshark::DataView< LabeledData< InputType, LabelType > const > | |
Cshark::DataView< shark::Data< InputType > const > | |
Cshark::DataView< shark::Data< LabelType > const > | |
Cshark::DiffGeometric_distribution< IntType, RealType > | Implements a diff geometric distribution |
Cshark::Dirichlet_distribution< RealType > | Dirichlet distribution |
►CDiscreteKernel | |
CGaussianTaskKernel< InputTypeT > | Special "Gaussian-like" kernel function on tasks |
Cshark::tags::DiscreteSpace | A tag for EnumerationSpaces. It tells the functions that the space is discrete and can be enumerated |
Cshark::DistantModes | Creates a set of patterns (each later representing a mode) which are then randomly perturbed to create the data set. The dataset was introduced in Desjardins et al. (2010) (Parallel Tempering for training restricted Boltzmann machines, AISTATS 2010) |
►Cshark::DistTrainerContainer | Container for known distribution trainers |
Cshark::GenericDistTrainer | |
Cshark::Divide | Transformation function dividing the elements in a dataset by a scalar or component-wise by values stored in a vector |
Cshark::DoublePole | |
Cshark::WilcoxonRankSumTest::Element | Stores information about an observation |
Cshark::ElitistSelection< Extractor > | Survival selection to find the next parent set |
Cshark::Energy< RBM > | The Energy function determining the Gibbs distribution of an RBM |
Cshark::EnergyStoringTemperedMarkovChain< Operator > | Implements parallel tempering but also stores additional statistics on the energy differences |
Cshark::QpSparseArray< QpFloatType >::Entry | Non-default (non-zero) array entry |
Cshark::EPTournamentSelection< Extractor > | Survival and mating selection to find the next parent set |
Cshark::Erlang_distribution< RealType > | Implements an Erlang distribution |
Cshark::QpMcBoxDecomp< Matrix >::Example | Data structure describing one training example |
Cshark::QpMcSimplexDecomp< Matrix >::Example | Data structure describing one training example |
Cshark::ExampleModifiedKernelMatrix< InputType, CacheType > | |
►Cstd::exception | STL class |
►Cshark::Exception | Top-level exception class of the shark library |
Cshark::TypedFeatureNotAvailableException< Feature > | Exception indicating the attempt to use a feature which is not supported |
Cshark::FFNetStructures | |
Cshark::FitnessExtractor | |
Cshark::Individual< PointType, FitnessTypeT, Chromosome >::FitnessOrdering | Ordering relation by the fitness of the individuals (single-objective only) |
Cshark::Gamma_distribution< RealType > | Gamma distribution |
Cshark::GaussianKernelMatrix< T, CacheType > | Efficient special case if the kernel is Gaussian and the inputs are sparse vectors |
Cshark::GeneralQuadraticProblem< MatrixT > | Most general problem formulation; needs to be configured by hand |
Cshark::GibbsOperator< RBMType > | Implements Block Gibbs Sampling related transition operators for various temperatures |
Cshark::HMGSelectionCriterion | |
Cshark::HyperGeometric_distribution< IntType, RealType > | Hypergeometric distribution |
Cshark::HypervolumeApproximator< Rng > | Implements an FPRAS for approximating the volume of a set of high-dimensional objects |
Cshark::HypervolumeCalculator | Implementation of the exact hypervolume calculation in m dimensions |
Cshark::HypervolumeIndicator | Calculates the hypervolume covered by a front of non-dominated points |
Cshark::IdentityFitnessExtractor | Functor that returns its argument without conversion |
Cshark::LeastContributorApproximator< Rng, ExactHypervolume >::IdentityFitnessExtractor | Returns the supplied argument |
Cshark::ImageInformation | Stores name and size of image externally |
►Cshark::INameable | This class is an interface for all objects which can have a name |
►Cshark::AbstractClustering< RealVector > | |
Cshark::Centroids | Clusters defined by centroids |
►Cshark::AbstractCost< LabelType, LabelType > | |
►Cshark::AbstractLoss< LabelType, LabelType > | |
Cshark::ZeroOneLoss< LabelType, OutputType > | 0-1-loss for classification |
►Cshark::AbstractCost< LabelType, OutputType > | |
►Cshark::AbstractLoss< LabelType, OutputType > | |
Cshark::SquaredLoss< OutputType, LabelType > | Squared loss for regression and classification |
Cshark::NegativeAUC< LabelType, OutputType > | Negative area under the curve |
Cshark::NegativeWilcoxonMannWhitneyStatistic< LabelType, OutputType > | Negative Wilcoxon-Mann-Whitney statistic |
►Cshark::AbstractCost< RealVector, RealVector > | |
►Cshark::AbstractLoss< RealVector, RealVector > | |
Cshark::EpsilonHingeLoss | Hinge-loss for large margin regression |
Cshark::HuberLoss | Huber-loss for robust regression |
Cshark::SquaredEpsilonHingeLoss | Hinge-loss for large margin regression using the squared two-norm |
Cshark::TukeyBiweightLoss | Tukey's Biweight-loss for robust regression |
►Cshark::AbstractCost< Sequence, Sequence > | |
►Cshark::AbstractLoss< Sequence, Sequence > | |
Cshark::SquaredLoss< Sequence, Sequence > | |
►Cshark::AbstractCost< unsigned int, OutputType > | |
►Cshark::AbstractLoss< unsigned int, OutputType > | |
Cshark::SquaredLoss< OutputType, unsigned int > | |
►Cshark::AbstractCost< unsigned int, RealVector > | |
►Cshark::AbstractLoss< unsigned int, RealVector > | |
Cshark::CrossEntropy | Error measure for classification tasks that can be used as the objective function for training |
Cshark::CrossEntropyIndependent | Error measure for classification tasks of non-exclusive attributes that can be used for model training |
Cshark::HingeLoss | Hinge-loss for large margin classification |
Cshark::SquaredHingeLoss | Squared Hinge-loss for large margin classification |
Cshark::ZeroOneLoss< unsigned int, RealVector > | 0-1-loss for classification |
►Cshark::AbstractCost< unsigned int, unsigned int > | |
►Cshark::AbstractLoss< unsigned int, unsigned int > | |
Cshark::DiscreteLoss | Flexible loss for classification |
►Cshark::AbstractCost< VectorType, VectorType > | |
►Cshark::AbstractLoss< VectorType, VectorType > | |
Cshark::AbsoluteLoss< VectorType > | Absolute loss |
►Cshark::AbstractMetric< InputType > | |
►Cshark::AbstractKernelFunction< InputType > | |
Cshark::ARDKernelUnconstrained< InputType > | Automatic relevance detection kernel for unconstrained parameter optimization |
Cshark::GaussianRbfKernel< InputType > | Gaussian radial basis function kernel |
Cshark::LinearKernel< InputType > | Linear Kernel, parameter free |
Cshark::ModelKernel< InputType > | Kernel function that uses a Model as transformation function for another kernel |
Cshark::MonomialKernel< InputType > | Monomial kernel. Calculates \( \left\langle x_1, x_2 \right\rangle^{m} \) for a fixed exponent \( m \) |
Cshark::NormalizedKernel< InputType > | Normalized version of a kernel function |
Cshark::PolynomialKernel< InputType > | Polynomial kernel |
Cshark::ProductKernel< InputType > | Product of kernel functions |
Cshark::ScaledKernel< InputType > | Scaled version of a kernel function |
►Cshark::WeightedSumKernel< InputType > | Weighted sum of kernel functions |
Cshark::MklKernel< InputType > | Weighted sum of kernel functions |
Cshark::SubrangeKernel< InputType > | Weighted sum of kernel functions |
►Cshark::AbstractMetric< std::size_t > | |
►Cshark::AbstractKernelFunction< std::size_t > | |
Cshark::DiscreteKernel | Kernel on a finite, discrete space |
►Cshark::AbstractModel< CARTClassifier< RealVector > ::InputType, CARTClassifier< RealVector > ::OutputType > | |
►Cshark::MeanModel< CARTClassifier< RealVector > > | |
Cshark::RFClassifier | Random Forest Classifier |
►Cshark::AbstractModel< DataType, DataType > | |
Cshark::Normalizer< DataType > | "Diagonal" linear model for data normalization |
►Cshark::AbstractModel< InputT, OutputT > | |
Cshark::ClusteringModel< InputT, OutputT > | Abstract model with associated clustering object |
►Cshark::AbstractModel< InputT, RealVector > | |
►Cshark::ClusteringModel< InputT, RealVector > | |
Cshark::SoftClusteringModel< InputT > | Model for "soft" clustering |
►Cshark::AbstractModel< InputT, unsigned int > | |
►Cshark::ClusteringModel< InputT, unsigned int > | |
Cshark::HardClusteringModel< InputT > | Model for "hard" clustering |
►Cshark::AbstractModel< InputType, OutputType > | |
Cshark::ConcatenatedModel< InputType, OutputType > | ConcatenatedModel concatenates two models such that the output of the first model is input to the second |
Cshark::NBClassifier< InputType, OutputType > | Naive Bayes classifier |
►Cshark::AbstractModel< InputType, RealVector > | |
►Cshark::KernelExpansion< InputType > | Linear model in a kernel feature space |
Cshark::MissingFeaturesKernelExpansion< InputType > | Kernel expansion with missing features support |
Cshark::LinearModel< InputType > | Linear Prediction |
Cshark::NearestNeighborRegression< InputType > | Nearest neighbor regression model |
Cshark::SoftNearestNeighborClassifier< InputType > | SoftNearestNeighborClassifier returns a probabilistic classification by looking at the k nearest neighbors |
►Cshark::AbstractModel< InputType, unsigned int > | |
Cshark::NearestNeighborClassifier< InputType > | Nearest Neighbor Classifier |
Cshark::OneVersusOneClassifier< InputType > | One-versus-one Classifier |
►Cshark::AbstractModel< KernelExpansion< InputType > ::InputType, unsigned int > | |
►Cshark::ArgMaxConverter< KernelExpansion< InputType > > | |
Cshark::KernelClassifier< InputType > | Linear classifier in a kernel feature space |
►Cshark::AbstractModel< LinearModel< VectorType > ::InputType, unsigned int > | |
►Cshark::ArgMaxConverter< LinearModel< VectorType > > | |
Cshark::LinearClassifier< VectorType > | Basic linear classifier |
►Cshark::AbstractModel< Model::InputType, unsigned int > | |
Cshark::ArgMaxConverter< Model > | Conversion of real-valued outputs to classes |
►Cshark::AbstractModel< ModelType::InputType, ModelType::OutputType > | |
Cshark::MeanModel< ModelType > | Calculates the weighted mean of a set of models |
►Cshark::AbstractModel< RealVector, LabelType > | |
Cshark::CARTClassifier< LabelType > | CART Classifier |
►Cshark::AbstractModel< RealVector, RealVector > | |
Cshark::CARTClassifier< RealVector > | |
Cshark::KernelExpansion< RealVector > | |
Cshark::Autoencoder< HiddenNeuron, OutputNeuron > | Implements the autoencoder |
Cshark::CMACMap | Linear combination of piecewise constant functions |
Cshark::ConvexCombination | Models a convex combination of inputs |
Cshark::ConvolutionalRBM< VisibleLayerT, HiddenLayerT, RngT > | Implements a convolutional RBM with a single greyscale input image and a set of squared image filters |
Cshark::FFNet< HiddenNeuron, OutputNeuron > | Offers the functions to create and to work with a feed-forward network |
Cshark::GaussianNoiseModel | Model which corrupts the data using Gaussian noise |
Cshark::ImpulseNoiseModel | Model which corrupts the data using impulse noise |
Cshark::LinearNorm | Normalizes the (non-negative) input by dividing by the overall sum |
Cshark::OnlineRNNet | A recurrent neural network regression model optimized for online learning |
Cshark::RBFLayer | Implements a layer of radial basis functions in a neural network |
Cshark::RBM< VisibleLayerT, HiddenLayerT, RngT > | Stub for the RBM class. At the moment it is just a holder of the parameter set and the Energy |
►Cshark::SigmoidModel | Standard sigmoid function |
Cshark::SimpleSigmoidModel | Simple sigmoid function |
Cshark::TanhSigmoidModel | Scaled Tanh sigmoid function |
Cshark::Softmax | Softmax function |
Cshark::ThresholdVectorConverter | Conversion of real-vector outputs to vectors of class labels 0 or 1 |
Cshark::TiedAutoencoder< HiddenNeuron, OutputNeuron > | Implements the autoencoder with tied weights |
►Cshark::AbstractModel< RealVector, unsigned int > | |
Cshark::ThresholdConverter | Conversion of real-valued outputs to classes 0 or 1 |
►Cshark::AbstractModel< Sequence, Sequence > | |
Cshark::RNNet | A recurrent neural network regression model that learns with Back Propagation Through Time |
►Cshark::AbstractModel< VectorType, RealVector > | |
Cshark::LinearModel< VectorType > | |
►Cshark::AbstractObjectiveFunction< RealVector, double > | |
Cshark::Cigar | Convex quadratic benchmark function with single dominant axis |
Cshark::ConstrainedSphere | Constrained Sphere function |
Cshark::Ellipsoid | Convex quadratic benchmark function |
Cshark::ErrorFunction | Objective function for supervised learning |
Cshark::GruauPole | Class for balancing two poles on a cart using a fitness function that punishes oscillating, i.e. quickly moving the cart back and forth to balance the poles. Based on code written by Verena Heidrich-Meisner for the paper |
Cshark::KernelBasisDistance | Computes the squared distance between the optimal point in a basis to the point represented by a KernelExpansion |
Cshark::LooErrorCSvm< InputType, CacheType > | Leave-one-out error, specifically optimized for C-SVMs |
Cshark::MergeBudgetMaintenanceStrategy< RealVector >::MergingProblemFunction | |
Cshark::MultiChainApproximator< MarkovChainType > | Approximates the gradient by taking samples from an ensemble of Markov chains running in parallel |
Cshark::NegativeGaussianProcessEvidence< InputType, OutputType, LabelType > | Evidence for model selection of a regularization network/Gaussian process |
Cshark::NegativeLogLikelihood | Computes the negative log likelihood of a dataset under a model |
Cshark::NonMarkovPole | Objective function for single and double non-Markov poles |
Cshark::OneNormRegularizer | One-norm of the input as an objective function |
Cshark::RadiusMarginQuotient< InputType, CacheType > | Radius-margin quotient for binary SVMs |
Cshark::SingleChainApproximator< MarkovChainType > | Approximates the gradient by taking samples from a single Markov chain |
Cshark::SparseAutoencoderError | Error Function for Autoencoders and TiedAutoencoders which should be trained with sparse activation of the hidden neurons |
Cshark::Sphere | Convex quadratic benchmark function |
Cshark::SvmLogisticInterpretation< InputType > | Maximum-likelihood model selection score for binary support vector machines |
Cshark::TwoNormRegularizer | Two-norm of the input as an objective function |
►Cshark::AbstractObjectiveFunction< SearchSpaceType, ResultT > | |
Cshark::CombinedObjectiveFunction< SearchSpaceType, ResultT > | Linear combination of objective functions |
►Cshark::AbstractOptimizer< PointType, double, SingleObjectiveResultSet< PointType > > | |
Cshark::AbstractSingleObjectiveOptimizer< PointType > | Base class for all single-objective optimizers |
►Cshark::AbstractOptimizer< PointTypeT, RealVector, std::vector< ResultSet< PointTypeT, RealVector > > > | |
Cshark::AbstractMultiObjectiveOptimizer< PointTypeT > | Base class for abstract multi-objective optimizers for arbitrary search spaces |
►Cshark::AbstractOptimizer< RealVector, double, SingleObjectiveResultSet< RealVector > > | |
►Cshark::AbstractSingleObjectiveOptimizer< RealVector > | |
►Cshark::AbstractLineSearchOptimizer | Basis class for line search methods |
Cshark::BFGS | Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm for unconstrained optimization |
Cshark::CG | Conjugate-gradient method for unconstrained optimization |
Cshark::LBFGS | Limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm for unconstrained optimization |
Cshark::CMA | Implements the CMA-ES |
Cshark::CMSA | Implements the CMSA |
Cshark::CrossEntropyMethod | Implements the Cross Entropy Method |
Cshark::ElitistCMA | Implements the elitist CMA-ES |
Cshark::GridSearch | Optimize by trying out a grid of configurations |
Cshark::LMCMA | Implements a Limited-Memory-CMA |
Cshark::NestedGridSearch | Nested grid search |
Cshark::PointSearch | Optimize by trying out predefined configurations |
►Cshark::RpropMinus | Implements the resilient backpropagation (Rprop) algorithm without weight backtracking |
Cshark::IRpropMinus | Implements the improved resilient backpropagation algorithm without weight backtracking |
►Cshark::RpropPlus | Implements the resilient backpropagation (Rprop) algorithm with weight backtracking |
Cshark::IRpropPlus | Implements the improved resilient backpropagation algorithm with weight backtracking |
Cshark::IRpropPlusFull | |
Cshark::SimplexDownhill | Simplex Downhill Method |
Cshark::SteepestDescent | Standard steepest descent |
Cshark::TrustRegionNewton | Simple Trust-Region method based on the full Hessian matrix |
Cshark::VDCMA | |
►Cshark::AbstractOptimizer< RealVector, RealVector, std::vector< ResultSet< RealVector, RealVector > > > | |
►Cshark::AbstractMultiObjectiveOptimizer< RealVector > | |
Cshark::IndicatorBasedMOCMA< Indicator > | Implements the generational MO-CMA-ES |
Cshark::IndicatorBasedRealCodedNSGAII< Indicator > | Implements the NSGA-II |
Cshark::IndicatorBasedSteadyStateMOCMA< Indicator > | Implements the \((\mu+1)\)-MO-CMA-ES |
Cshark::SMSEMOA | Implements the SMS-EMOA |
►Cshark::AbstractTrainer< CARTClassifier< RealVector >, RealVector > | |
Cshark::CARTTrainer | Classification and Regression Trees (CART) |
►Cshark::AbstractTrainer< CARTClassifier< RealVector >, unsigned int > | |
Cshark::CARTTrainer | Classification and Regression Trees (CART) |
►Cshark::AbstractTrainer< KernelClassifier< InputType > > | |
Cshark::KernelBudgetedSGDTrainer< InputType, CacheType > | Budgeted stochastic gradient descent training for kernel-based models |
Cshark::KernelSGDTrainer< InputType, CacheType > | Generic stochastic gradient descent training for kernel-based models |
►Cshark::AbstractTrainer< KernelClassifier< InputType >, typename KernelClassifier< InputType > ::OutputType > | |
►Cshark::AbstractWeightedTrainer< KernelClassifier< InputType > > | |
►Cshark::AbstractSvmTrainer< InputType, unsigned int, KernelClassifier< InputType >, AbstractWeightedTrainer< KernelClassifier< InputType > > > | |
Cshark::CSvmTrainer< InputType, CacheType > | Training of C-SVMs for binary classification |
►Cshark::AbstractTrainer< KernelClassifier< InputType >, unsigned int > | |
Cshark::KernelMeanClassifier< InputType > | Kernelized mean-classifier |
Cshark::Perceptron< InputType > | Perceptron online learning algorithm |
Cshark::AbstractTrainer< KernelExpansion< InputType >, RealVector > | |
►Cshark::AbstractTrainer< LinearClassifier< InputType >, unsigned int > | |
►Cshark::AbstractLinearSvmTrainer< InputType > | Super class of all linear SVM trainers |
Cshark::LinearCSvmTrainer< InputType > | |
Cshark::LinearMcSvmADMTrainer< InputType > | |
Cshark::LinearMcSvmATMTrainer< InputType > | |
Cshark::LinearMcSvmATSTrainer< InputType > | |
Cshark::LinearMcSvmCSTrainer< InputType > | |
Cshark::LinearMcSvmLLWTrainer< InputType > | |
Cshark::LinearMcSvmMMRTrainer< InputType > | |
Cshark::LinearMcSvmOVATrainer< InputType > | |
Cshark::LinearMcSvmReinforcedTrainer< InputType > | |
Cshark::LinearMcSvmWWTrainer< InputType > | |
Cshark::SquaredHingeLinearCSvmTrainer< InputType > | |
►Cshark::AbstractTrainer< LinearClassifier<>, unsigned int > | |
►Cshark::AbstractWeightedTrainer< LinearClassifier<>, unsigned int > | |
Cshark::LDA | Linear Discriminant Analysis (LDA) |
►Cshark::AbstractTrainer< LinearModel< InputVectorType > > | |
Cshark::LassoRegression< InputVectorType > | LASSO Regression |
►Cshark::AbstractTrainer< LinearModel<> > | |
Cshark::LinearRegression | Linear Regression |
►Cshark::AbstractTrainer< LinearModel<>, unsigned int > | |
Cshark::FisherLDA | Fisher's Linear Discriminant Analysis for data compression |
Cshark::AbstractTrainer< MissingFeaturesKernelExpansion< InputType >, unsigned int > | |
►Cshark::AbstractTrainer< NBClassifier< InputType, OutputType > > | |
Cshark::NBClassifierTrainer< InputType, OutputType > | Trainer for naive Bayes classifier |
►Cshark::AbstractTrainer< RFClassifier > | |
Cshark::RFTrainer | Random Forest |
►Cshark::AbstractTrainer< RFClassifier, unsigned int > | |
Cshark::RFTrainer | Random Forest |
►Cshark::AbstractTrainer< SigmoidModel, unsigned int > | |
Cshark::SigmoidFitPlatt | Optimizes the parameters of a sigmoid to fit a validation dataset via Platt's method |
Cshark::SigmoidFitRpropNLL | Optimizes the parameters of a sigmoid to fit a validation dataset via backpropagation on the negative log-likelihood |
►Cshark::AbstractUnsupervisedTrainer< KernelExpansion< InputType > > | |
Cshark::OneClassSvmTrainer< InputType, CacheType > | Training of one-class SVMs |
►Cshark::AbstractUnsupervisedTrainer< LinearModel< RealVector > > | |
Cshark::NormalizeComponentsWhitening | Train a linear model to whiten the data |
Cshark::NormalizeComponentsZCA | Train a linear model to whiten the data |
►Cshark::AbstractUnsupervisedTrainer< LinearModel<> > | |
Cshark::PCA | Principal Component Analysis |
►Cshark::AbstractUnsupervisedTrainer< Normalizer< DataType > > | |
Cshark::NormalizeComponentsUnitInterval< DataType > | Train a model to normalize the components of a dataset to fit into the unit interval |
Cshark::NormalizeComponentsUnitVariance< DataType > | Train a linear model to normalize the components of a dataset to unit variance, and optionally to zero mean |
►Cshark::AbstractUnsupervisedTrainer< ScaledKernel< InputType > > | |
Cshark::NormalizeKernelUnitVariance< InputType > | Determine the scaling factor of a ScaledKernel so that it has unit variance in feature space on a given dataset |
►Cshark::AbstractClustering< InputT > | Base class for clustering |
Cshark::HierarchicalClustering< InputT > | Clusters defined by a binary space partitioning tree |
►Cshark::AbstractCost< LabelT, OutputT > | Cost function interface |
Cshark::AbstractLoss< LabelT, OutputT > | Loss function interface |
►Cshark::AbstractMetric< InputTypeT > | |
Cshark::AbstractKernelFunction< InputTypeT > | Base class of all Kernel functions |
Cshark::AbstractModel< InputTypeT, OutputTypeT > | Base class for all Models |
►Cshark::AbstractObjectiveFunction< PointType, ResultT > | Super class of all objective functions for optimization and learning |
Cshark::Ackley | Multi-modal, non-convex Ackley benchmark function |
Cshark::CigarDiscus | Convex quadratic benchmark function |
Cshark::CIGTAB1 | Multi-objective optimization benchmark function CIGTAB 1 |
Cshark::CIGTAB2 | Multi-objective optimization benchmark function CIGTAB 2 |
Cshark::ContrastiveDivergence< Operator > | Implements k-step Contrastive Divergence described by Hinton et al. (2006) |
Cshark::CrossValidationError< ModelTypeT, LabelTypeT > | Cross-validation error for selection of hyper-parameters |
Cshark::DiffPowers | |
Cshark::Discus | Convex quadratic benchmark function |
Cshark::DTLZ1 | Implements the benchmark function DTLZ1 |
Cshark::DTLZ2 | Implements the benchmark function DTLZ2 |
Cshark::DTLZ3 | Implements the benchmark function DTLZ3 |
Cshark::DTLZ4 | Implements the benchmark function DTLZ4 |
Cshark::DTLZ5 | Implements the benchmark function DTLZ5 |
Cshark::DTLZ6 | Implements the benchmark function DTLZ6 |
Cshark::DTLZ7 | Implements the benchmark function DTLZ7 |
Cshark::ELLI1 | Multi-objective optimization benchmark function ELLI1 |
Cshark::ELLI2 | Multi-objective optimization benchmark function ELLI2 |
Cshark::EvaluationArchive< PointType, ResultT > | Objective function wrapper storing all function evaluations |
Cshark::ExactGradient< RBMType > | |
Cshark::Fonseca | Bi-objective real-valued benchmark function proposed by Fonseca and Flemming |
Cshark::GSP | Real-valued benchmark function with two objectives |
Cshark::Himmelblau | Multi-modal two-dimensional continuous Himmelblau benchmark function |
Cshark::IHR1 | Multi-objective optimization benchmark function IHR1 |
Cshark::IHR2 | Multi-objective optimization benchmark function IHR 2 |
Cshark::IHR3 | Multi-objective optimization benchmark function IHR3 |
Cshark::IHR4 | Multi-objective optimization benchmark function IHR 4 |
Cshark::IHR6 | Multi-objective optimization benchmark function IHR 6 |
Cshark::KernelTargetAlignment< InputType, LabelType > | Kernel Target Alignment - a measure of alignment of a kernel Gram matrix with labels |
Cshark::LooError< ModelTypeT, LabelType > | Leave-one-out error objective function |
Cshark::LZ1 | Multi-objective optimization benchmark function LZ1 |
Cshark::LZ2 | Multi-objective optimization benchmark function LZ2 |
Cshark::LZ3 | Multi-objective optimization benchmark function LZ3 |
Cshark::LZ4 | Multi-objective optimization benchmark function LZ4 |
Cshark::LZ5 | Multi-objective optimization benchmark function LZ5 |
Cshark::LZ6 | Multi-objective optimization benchmark function LZ6 |
Cshark::LZ7 | Multi-objective optimization benchmark function LZ7 |
Cshark::LZ8 | Multi-objective optimization benchmark function LZ8 |
Cshark::LZ9 | |
Cshark::MarkovPole< HiddenNeuron, OutputNeuron > | |
Cshark::NoisyErrorFunction | Error Function which only uses a random fraction of data |
Cshark::Rosenbrock | Generalized Rosenbrock benchmark function |
Cshark::RotatedObjectiveFunction | Rotates an objective function using a randomly initialized rotation |
Cshark::Schwefel | Convex benchmark function |
Cshark::ZDT1 | Multi-objective optimization benchmark function ZDT1 |
Cshark::ZDT2 | Multi-objective optimization benchmark function ZDT2 |
Cshark::ZDT3 | Multi-objective optimization benchmark function ZDT3 |
Cshark::ZDT4 | Multi-objective optimization benchmark function ZDT4 |
Cshark::ZDT6 | Multi-objective optimization benchmark function ZDT6 |
Cshark::AbstractOptimizer< PointType, ResultT, SolutionTypeT > | An optimizer that optimizes general objective functions |
►Cshark::AbstractTrainer< Model, LabelTypeT > | Superclass of supervised learning algorithms |
►Cshark::AbstractSvmTrainer< InputType, RealVector, KernelExpansion< InputType > > | |
Cshark::EpsilonSvmTrainer< InputType, CacheType > | Training of Epsilon-SVMs for regression |
Cshark::RegularizationNetworkTrainer< InputType > | Training of a regularization network |
►Cshark::AbstractSvmTrainer< InputType, unsigned int > | |
Cshark::McReinforcedSvmTrainer< InputType, CacheType > | Training of reinforced-SVM for multi-category classification |
Cshark::McSvmADMTrainer< InputType, CacheType > | Training of ADM-SVMs for multi-category classification |
Cshark::McSvmATMTrainer< InputType, CacheType > | Training of ATM-SVMs for multi-category classification |
Cshark::McSvmATSTrainer< InputType, CacheType > | Training of ATS-SVMs for multi-category classification |
Cshark::McSvmCSTrainer< InputType, CacheType > | Training of the multi-category SVM by Crammer and Singer (CS) |
Cshark::McSvmLLWTrainer< InputType, CacheType > | Training of the multi-category SVM by Lee, Lin and Wahba (LLW) |
Cshark::McSvmMMRTrainer< InputType, CacheType > | Training of the maximum margin regression (MMR) multi-category SVM |
Cshark::McSvmOVATrainer< InputType, CacheType > | Training of a multi-category SVM by the one-versus-all (OVA) method |
Cshark::McSvmWWTrainer< InputType, CacheType > | Training of the multi-category SVM by Weston and Watkins (WW) |
Cshark::SquaredHingeCSvmTrainer< InputType, CacheType > | |
►Cshark::AbstractSvmTrainer< InputType, unsigned int, MissingFeaturesKernelExpansion< InputType > > | |
Cshark::MissingFeatureSvmTrainer< InputType, CacheType > | Trainer for binary SVMs natively supporting missing features |
Cshark::AbstractWeightedTrainer< Model, LabelTypeT > | Superclass of weighted supervised learning algorithms |
Cshark::OptimizationTrainer< Model, LabelTypeT > | Wrapper for training schemes based on (iterative) optimization |
►Cshark::AbstractUnsupervisedTrainer< Model > | Superclass of unsupervised learning algorithms |
Cshark::AbstractWeightedUnsupervisedTrainer< Model > | Superclass of weighted unsupervised learning algorithms |
Cshark::CSvmDerivative< InputType, CacheType > | This class provides two main member functions for computing the derivative of a C-SVM hypothesis w.r.t. its hyperparameters. The constructor takes a pointer to a KernelClassifier and an SvmTrainer, in the assumption that the former was trained by the latter. It heavily accesses their members to calculate the derivative of the alpha and offset values w.r.t. the SVM hyperparameters, that is, the regularization parameter C and the kernel parameters. This is done in the member function prepareCSvmParameterDerivative called by the constructor. After this initial, heavier computation step, modelCSvmParameterDerivative can be called on an input sample to the SVM model, and the method will yield the derivative of the hypothesis w.r.t. the SVM hyperparameters |
Cshark::LabelOrder | This will normalize the labels of a given dataset to 0..N-1 |
Cshark::IndicatorBasedSelection< Indicator > | Implements the well-known indicator-based selection strategy |
Cshark::IndicatorBasedSelection< shark::HypervolumeIndicator > | |
Cshark::Individual< PointType, FitnessTypeT, Chromosome > | Individual is a simple templated class modelling an individual that acts as a candidate solution in an evolutionary algorithm |
►Cshark::Individual< RealVector, double, CMAChromosome > | |
Cshark::CMAIndividual< double > | |
►Cshark::Individual< RealVector, FitnessType, CMAChromosome > | |
Cshark::CMAIndividual< FitnessType > | |
►Cshark::CrossEntropyMethod::INoiseType | Interface class for noise type |
Cshark::CrossEntropyMethod::ConstantNoise | Constant noise term z_t = noise |
Cshark::CrossEntropyMethod::LinearNoise | Linear noise term z_t = a + t / b |
Cshark::InvertedGenerationalDistance | Inverted generational distance for comparing Pareto-front approximations |
►Cshark::IParameterizable | Top level interface for everything that holds parameters |
Cshark::AbstractClustering< RealVector > | |
Cshark::AbstractMetric< InputType > | |
Cshark::AbstractMetric< std::size_t > | |
Cshark::AbstractModel< CARTClassifier< RealVector > ::InputType, CARTClassifier< RealVector > ::OutputType > | |
Cshark::AbstractModel< DataType, DataType > | |
Cshark::AbstractModel< InputT, OutputT > | |
Cshark::AbstractModel< InputT, RealVector > | |
Cshark::AbstractModel< InputT, unsigned int > | |
Cshark::AbstractModel< InputType, OutputType > | |
Cshark::AbstractModel< InputType, RealVector > | |
Cshark::AbstractModel< InputType, unsigned int > | |
Cshark::AbstractModel< KernelExpansion< InputType > ::InputType, unsigned int > | |
Cshark::AbstractModel< LinearModel< VectorType > ::InputType, unsigned int > | |
Cshark::AbstractModel< Model::InputType, unsigned int > | |
Cshark::AbstractModel< ModelType::InputType, ModelType::OutputType > | |
Cshark::AbstractModel< RealVector, LabelType > | |
Cshark::AbstractModel< RealVector, RealVector > | |
Cshark::AbstractModel< RealVector, unsigned int > | |
Cshark::AbstractModel< Sequence, Sequence > | |
Cshark::AbstractModel< VectorType, RealVector > | |
Cshark::AbstractSvmTrainer< InputType, RealVector, KernelExpansion< InputType > > | |
Cshark::AbstractSvmTrainer< InputType, unsigned int > | |
Cshark::AbstractSvmTrainer< InputType, unsigned int, KernelClassifier< InputType >, AbstractWeightedTrainer< KernelClassifier< InputType > > > | |
Cshark::AbstractSvmTrainer< InputType, unsigned int, MissingFeaturesKernelExpansion< InputType > > | |
Cshark::AbstractClustering< InputT > | Base class for clustering |
Cshark::AbstractLinearSvmTrainer< InputType > | Super class of all linear SVM trainers |
Cshark::AbstractMetric< InputTypeT > | |
Cshark::AbstractModel< InputTypeT, OutputTypeT > | Base class for all Models |
Cshark::AbstractSvmTrainer< InputType, LabelType, Model, Trainer > | Super class of all kernelized (non-linear) SVM trainers |
Cshark::BinaryLayer | Layer of binary units taking values in {0,1} |
Cshark::BipolarLayer | Layer of bipolar units taking values in {-1,1} |
Cshark::GaussianLayer | A layer of Gaussian neurons |
Cshark::KernelBudgetedSGDTrainer< InputType, CacheType > | Budgeted stochastic gradient descent training for kernel-based models |
Cshark::KernelSGDTrainer< InputType, CacheType > | Generic stochastic gradient descent training for kernel-based models |
Cshark::LassoRegression< InputVectorType > | LASSO Regression |
Cshark::LDA | Linear Discriminant Analysis (LDA) |
Cshark::LinearRegression | Linear Regression |
Cshark::OneClassSvmTrainer< InputType, CacheType > | Training of one-class SVMs |
Cshark::RFTrainer | Random Forest |
Cshark::TruncatedExponentialLayer | A layer of truncated exponential neurons |
►Cshark::ISerializable | Abstracts serializing functionality |
Cshark::AbstractClustering< RealVector > | |
Cshark::AbstractMetric< InputType > | |
Cshark::AbstractMetric< std::size_t > | |
Cshark::AbstractModel< CARTClassifier< RealVector > ::InputType, CARTClassifier< RealVector > ::OutputType > | |
Cshark::AbstractModel< DataType, DataType > | |
Cshark::AbstractModel< InputT, OutputT > | |
Cshark::AbstractModel< InputT, RealVector > | |
Cshark::AbstractModel< InputT, unsigned int > | |
Cshark::AbstractModel< InputType, OutputType > | |
Cshark::AbstractModel< InputType, RealVector > | |
Cshark::AbstractModel< InputType, unsigned int > | |
Cshark::AbstractModel< KernelExpansion< InputType > ::InputType, unsigned int > | |
Cshark::AbstractModel< LinearModel< VectorType > ::InputType, unsigned int > | |
Cshark::AbstractModel< Model::InputType, unsigned int > | |
Cshark::AbstractModel< ModelType::InputType, ModelType::OutputType > | |
Cshark::AbstractModel< RealVector, LabelType > | |
Cshark::AbstractModel< RealVector, RealVector > | |
Cshark::AbstractModel< RealVector, unsigned int > | |
Cshark::AbstractModel< Sequence, Sequence > | |
Cshark::AbstractModel< VectorType, RealVector > | |
Cshark::AbstractOptimizer< PointType, double, SingleObjectiveResultSet< PointType > > | |
Cshark::AbstractOptimizer< PointTypeT, RealVector, std::vector< ResultSet< PointTypeT, RealVector > > > | |
Cshark::AbstractOptimizer< RealVector, double, SingleObjectiveResultSet< RealVector > > | |
Cshark::AbstractOptimizer< RealVector, RealVector, std::vector< ResultSet< RealVector, RealVector > > > | |
Cshark::AbstractTrainer< CARTClassifier< RealVector >, RealVector > | |
Cshark::AbstractTrainer< CARTClassifier< RealVector >, unsigned int > | |
Cshark::AbstractTrainer< KernelClassifier< InputType > > | |
Cshark::AbstractTrainer< KernelClassifier< InputType >, typename KernelClassifier< InputType > ::OutputType > | |
Cshark::AbstractTrainer< KernelClassifier< InputType >, unsigned int > | |
Cshark::AbstractTrainer< KernelExpansion< InputType >, RealVector > | |
Cshark::AbstractTrainer< LinearClassifier< InputType >, unsigned int > | |
Cshark::AbstractTrainer< LinearClassifier<>, unsigned int > | |
Cshark::AbstractTrainer< LinearModel< InputVectorType > > | |
Cshark::AbstractTrainer< LinearModel<> > | |
Cshark::AbstractTrainer< LinearModel<>, unsigned int > | |
Cshark::AbstractTrainer< MissingFeaturesKernelExpansion< InputType >, unsigned int > | |
Cshark::AbstractTrainer< NBClassifier< InputType, OutputType > > | |
Cshark::AbstractTrainer< RFClassifier > | |
Cshark::AbstractTrainer< RFClassifier, unsigned int > | |
Cshark::AbstractTrainer< SigmoidModel, unsigned int > | |
Cshark::AbstractUnsupervisedTrainer< KernelExpansion< InputType > > | |
Cshark::AbstractUnsupervisedTrainer< LinearModel< RealVector > > | |
Cshark::AbstractUnsupervisedTrainer< LinearModel<> > | |
Cshark::AbstractUnsupervisedTrainer< Normalizer< DataType > > | |
Cshark::AbstractUnsupervisedTrainer< ScaledKernel< InputType > > | |
►Cshark::detail::BaseWeightedDataset< LabeledData< InputT, LabelT > > | |
Cshark::WeightedLabeledData< InputT, LabelT > | Weighted data set for supervised learning |
►Cshark::detail::BaseWeightedDataset< UnlabeledData< DataT > > | |
Cshark::WeightedUnlabeledData< DataT > | Weighted data set for unsupervised learning |
►Cshark::Data< InputT > | |
Cshark::UnlabeledData< InputT > | Data set for unsupervised learning |
►Cshark::Data< InputType > | |
Cshark::UnlabeledData< InputType > | |
Cshark::Data< LabelT > | |
Cshark::Data< LabelType > | |
►Cshark::Data< RealVector > | |
Cshark::UnlabeledData< RealVector > | |
Cshark::Data< unsigned int > | |
Cshark::LabeledData< InputType, LabelType > | |
Cshark::LabeledData< InputType, unsigned int > | |
Cshark::AbstractClustering< InputT > | Base class for clustering |
Cshark::AbstractMetric< InputTypeT > | |
Cshark::AbstractModel< InputTypeT, OutputTypeT > | Base class for all Models |
Cshark::AbstractOptimizer< PointType, ResultT, SolutionTypeT > | An optimizer that optimizes general objective functions |
Cshark::AbstractTrainer< Model, LabelTypeT > | Superclass of supervised learning algorithms |
Cshark::AbstractUnsupervisedTrainer< Model > | Superclass of unsupervised learning algorithms |
Cshark::BinaryLayer | Layer of binary units taking values in {0,1} |
Cshark::BipolarLayer | Layer of bipolar units taking values in {-1,1} |
Cshark::CSvmDerivative< InputType, CacheType > | This class provides two main member functions for computing the derivative of a C-SVM hypothesis w.r.t. its hyperparameters. The constructor takes a pointer to a KernelClassifier and an SvmTrainer, in the assumption that the former was trained by the latter. It heavily accesses their members to calculate the derivative of the alpha and offset values w.r.t. the SVM hyperparameters, that is, the regularization parameter C and the kernel parameters. This is done in the member function prepareCSvmParameterDerivative called by the constructor. After this initial, heavier computation step, modelCSvmParameterDerivative can be called on an input sample to the SVM model, and the method will yield the derivative of the hypothesis w.r.t. the SVM hyperparameters |
Cshark::Data< Type > | Data container |
Cshark::GaussianLayer | A layer of Gaussian neurons |
Cshark::LabeledData< InputT, LabelT > | Data set for supervised learning |
Cshark::LineSearch | Wrapper for the linesearch class of functions in the linear algebra library |
Cshark::MultiTaskSample< InputTypeT > | Aggregation of input data and task index |
Cshark::RecurrentStructure | Offers a basic structure for recurrent networks |
Cshark::TruncatedExponentialLayer | A layer of truncated exponential neurons |
Cshark::TypedFlags< Flag > | Flexible and extensible mechanisms for holding flags |
Cshark::TypedFlags< Feature > | |
Cshark::IterativeNNQuery< DataContainer > | Iterative nearest neighbors query |
►Citerator_range | |
Cshark::KeyValueRange< Iterator1, Iterator2 > | |
Cshark::JaakkolaHeuristic | Jaakkola's heuristic and related quantities for Gaussian kernel selection |
Cshark::KernelMatrix< InputType, CacheType > | Kernel Gram matrix |
Cshark::LabeledDataDistribution< InputType, LabelType > | A LabeledDataDistribution defines a supervised learning problem |
Cshark::LabeledDataDistribution< InputType, unsigned int > | |
►Cshark::LabeledDataDistribution< RealVector, RealVector > | |
Cshark::Wave | Noisy sinc function: y = sin(x) / x + noise |
►Cshark::LabeledDataDistribution< RealVector, unsigned int > | |
Cshark::Chessboard | "chess board" problem for binary classification |
Cshark::CircleInSquare | |
Cshark::DiagonalWithCircle | |
Cshark::PamiToy | |
Cshark::LeastContributorApproximator< Rng, ExactHypervolume > | Approximately determines the point of a set contributing the least hypervolume |
Cshark::LibSVMSelectionCriterion | |
Cshark::LinearRankingSelection< Extractor > | Implements a fitness-proportional selection scheme for mating selection that scales the fitness values linearly before carrying out the actual selection |
Cshark::LRUCache< T > | Implements an LRU caching strategy for arbitrary cache lines |
Cshark::LRUCache< QpFloatType > | |
Cshark::MarkovChain< Operator > | A single Markov chain |
►Cshark::blas::matrix_expression< E > | Base class for Matrix Expression models |
►Cshark::blas::matrix_container< compressed_matrix< T, I > > | |
Cshark::blas::compressed_matrix< T, I > | |
►Cshark::blas::matrix_container< diagonal_matrix< scalar_vector< T > > > | |
►Cshark::blas::diagonal_matrix< scalar_vector< T > > | |
Cshark::blas::identity_matrix< T > | An identity matrix with values of type T |
►Cshark::blas::matrix_container< diagonal_matrix< VectorType > > | |
Cshark::blas::diagonal_matrix< VectorType > | A diagonal matrix with values stored inside a diagonal vector |
►Cshark::blas::matrix_container< matrix< double, blas::column_major > > | |
Cshark::blas::matrix< double, blas::column_major > | |
►Cshark::blas::matrix_container< matrix< QpFloatType, row_major > > | |
Cshark::blas::matrix< QpFloatType > | |
►Cshark::blas::matrix_container< matrix< T, L > > | |
Cshark::blas::matrix< T, L > | A dense matrix of values of type T |
►Cshark::blas::matrix_container< scalar_matrix< T > > | |
Cshark::blas::scalar_matrix< T > | A matrix with all values of type T equal to the same value |
►Cshark::blas::matrix_container< triangular_matrix< T, Orientation, TriangularType > > | |
Cshark::blas::triangular_matrix< T, Orientation, TriangularType > | |
►Cshark::blas::matrix_expression< C > | |
Cshark::blas::matrix_container< C > | Base class for Matrix container models |
►Cshark::blas::matrix_expression< dense_matrix_adaptor< T, Orientation > > | |
Cshark::blas::dense_matrix_adaptor< T, Orientation > | |
Cshark::blas::matrix_expression< internal_transpose_proxy< M > > | |
►Cshark::blas::matrix_expression< matrix_addition< E1, E2 > > | |
Cshark::blas::matrix_addition< E1, E2 > | |
►Cshark::blas::matrix_expression< matrix_binary< E1, E2, F > > | |
Cshark::blas::matrix_binary< E1, E2, F > | |
►Cshark::blas::matrix_expression< matrix_matrix_prod< MatA, MatB > > | |
Cshark::blas::matrix_matrix_prod< MatA, MatB > | |
►Cshark::blas::matrix_expression< matrix_range< M > > | |
Cshark::blas::matrix_range< M > | |
►Cshark::blas::matrix_expression< matrix_range< Matrix > > | |
Cshark::blas::matrix_range< Matrix > | |
►Cshark::blas::matrix_expression< matrix_reference< M > > | |
Cshark::blas::matrix_reference< M > | Wraps another expression as a reference |
►Cshark::blas::matrix_expression< matrix_scalar_multiply< E > > | |
Cshark::blas::matrix_scalar_multiply< E > | |
►Cshark::blas::matrix_expression< matrix_transpose< M > > | |
Cshark::blas::matrix_transpose< M > | Matrix transpose |
►Cshark::blas::matrix_expression< matrix_unary< E, F > > | |
Cshark::blas::matrix_unary< E, F > | Class which allows for matrix transformations |
►Cshark::blas::matrix_expression< outer_product< E1, E2 > > | |
Cshark::blas::outer_product< E1, E2 > | |
►Cshark::blas::matrix_expression< vector_repeater< V > > | |
Cshark::blas::vector_repeater< V > | |
Cshark::blas::matrix_set_expression< E > | Base class for expressions of matrix sets |
►Cshark::blas::matrix_set_expression< matrix_set< element_type > > | |
Cshark::blas::matrix_set< element_type > | |
►Cshark::blas::matrix_set_expression< matrix_set< RealMatrix > > | |
Cshark::blas::matrix_set< RealMatrix > | |
Cshark::MaximumGainCriterion | Working set selection by maximization of the dual objective gain |
Cshark::MaximumGradientCriterion | Working set selection by maximization of the projected gradient |
Cshark::McPegasos< VectorType > | Pegasos solver for linear multi-class support vector machines |
►CMklKernelBase | |
CMultiTaskKernel< InputTypeT > | Special kernel function for multi-task and transfer learning |
Cshark::MklKernel< InputType > | Weighted sum of kernel functions |
Cshark::MNIST | Reads in the famous MNIST data in possibly binarized form. The MNIST database itself is not included in Shark; this class only helps with loading it |
Cshark::ModifiedKernelMatrix< InputType, CacheType > | Modified Kernel Gram matrix |
Cshark::MultiNomialDistribution | Implements a multinomial distribution |
Cshark::MultiplicativeEpsilonIndicator | Given a reference front R and an approximation F, calculates the multiplicative approximation quality of F |
Cshark::Multiply | Transformation function multiplying the elements in a dataset by a scalar or component-wise by values stored in a vector |
Cshark::MultiVariateNormalDistribution | Implements a multi-variate normal distribution with zero mean |
Cshark::MultiVariateNormalDistributionCholesky | Multivariate normal distribution with zero mean using a Cholesky decomposition |
Cshark::MVPSelectionCriterion | |
►Cshark::detail::NeuronBase< DropoutNeuron< Neuron > > | |
Cshark::DropoutNeuron< Neuron > | Wraps a given neuron type and implements dropout for it |
►Cshark::detail::NeuronBase< FastSigmoidNeuron > | |
Cshark::FastSigmoidNeuron | Fast sigmoidal function, which does not need to compute an exponential function |
►Cshark::detail::NeuronBase< LinearNeuron > | |
Cshark::LinearNeuron | Linear activation Neuron |
►Cshark::detail::NeuronBase< LogisticNeuron > | |
Cshark::LogisticNeuron | Neuron which computes the logistic (sigmoid) function with range [0,1] |
►Cshark::detail::NeuronBase< RectifierNeuron > | |
Cshark::RectifierNeuron | Rectifier Neuron f(x) = max(0,x) |
►Cshark::detail::NeuronBase< TanhNeuron > | |
Cshark::TanhNeuron | Neuron which computes the hyperbolic tangent with range [-1,1] |
Cshark::blas::noalias_proxy< C > | |
►Cnoncopyable | |
Cshark::NBClassifier< InputType, OutputType > | Naive Bayes classifier |
Cshark::ScopedHandle< T > | |
Cshark::NormalTrainer | Trainer for normal distribution |
Cshark::OnePointCrossover | Implements one-point crossover |
Cshark::PairRangeType< PairType, Range1, Range2 > | |
Cshark::PairReference< Pair, Iterator1, Iterator2 > | Given a type of pair and two iterators to zip together, returns the reference |
Cshark::ParetoDominanceComparator< Extractor > | Implementation of the Pareto-Dominance relation under the assumption of all objectives to be minimized |
►Cpartially_ordered | |
Cshark::KeyValuePair< Key, Value > | Represents a key-value pair, similar to std::pair, which is strictly ordered by its key |
Cshark::PartlyPrecomputedMatrix< Matrix > | Partly Precomputed version of a matrix for quadratic programming |
Cshark::Pegasos< VectorType > | Pegasos solver for linear (binary) support vector machines |
Cshark::PenalizingEvaluator | Penalizing evaluator for scalar objective functions |
Cshark::LeastContributorApproximator< Rng, ExactHypervolume >::Point< VectorType > | Models a point and associated information for book-keeping purposes |
Cshark::EvaluationArchive< PointType, ResultT >::PointResultPairType | Pair of point and result |
Cshark::PolynomialMutator | Polynomial mutation operator |
Cshark::PopulationBasedStepSizeAdaptation | Step size adaptation based on the success of the new population compared to the old |
Cshark::PrecomputedMatrix< Matrix > | Precomputed version of a matrix for quadratic programming |
Cshark::QpMcSimplexDecomp< Matrix >::PreferedSelectionStrategy | Working set selection returning the S2DO working set |
Cshark::QpMcBoxDecomp< Matrix >::PreferedSelectionStrategy | Working set selection returning the S2DO working set |
►CProductKernel | |
CMultiTaskKernel< InputTypeT > | Special kernel function for multi-task and transfer learning |
Cshark::QpBoxLinear< InputT > | Quadratic program solver for box-constrained problems with linear kernel |
Cshark::QpBoxLinear< CompressedRealVector > | |
►Cshark::QpConfig | Super class of all support vector machine trainers |
Cshark::AbstractSvmTrainer< InputType, RealVector, KernelExpansion< InputType > > | |
Cshark::AbstractSvmTrainer< InputType, unsigned int > | |
Cshark::AbstractSvmTrainer< InputType, unsigned int, KernelClassifier< InputType >, AbstractWeightedTrainer< KernelClassifier< InputType > > > | |
Cshark::AbstractSvmTrainer< InputType, unsigned int, MissingFeaturesKernelExpansion< InputType > > | |
Cshark::AbstractLinearSvmTrainer< InputType > | Super class of all linear SVM trainers |
Cshark::AbstractSvmTrainer< InputType, LabelType, Model, Trainer > | Super class of all kernelized (non-linear) SVM trainers |
Cshark::OneClassSvmTrainer< InputType, CacheType > | Training of one-class SVMs |
Cshark::QpMcBoxDecomp< Matrix > | |
Cshark::QpMcDecomp< Matrix > | Quadratic program solver for multi class SVM problems |
►Cshark::QpMcLinear< InputT > | Generic solver skeleton for linear multi-class SVM problems |
Cshark::QpMcLinearADM< InputT > | Solver for the multi-class SVM with absolute margin and discriminative maximum loss |
Cshark::QpMcLinearATM< InputT > | Solver for the multi-class SVM with absolute margin and total maximum loss |
Cshark::QpMcLinearATS< InputT > | Solver for the multi-class SVM with absolute margin and total sum loss |
Cshark::QpMcLinearCS< InputT > | Solver for the multi-class SVM by Crammer & Singer |
Cshark::QpMcLinearLLW< InputT > | Solver for the multi-class SVM by Lee, Lin & Wahba |
Cshark::QpMcLinearMMR< InputT > | Solver for the multi-class maximum margin regression SVM |
Cshark::QpMcLinearReinforced< InputT > | Solver for the "reinforced" multi-class SVM |
Cshark::QpMcLinearWW< InputT > | Solver for the multi-class SVM by Weston & Watkins |
Cshark::QpMcSimplexDecomp< Matrix > | |
Cshark::QpSolutionProperties | Properties of the solution of a quadratic program |
Cshark::QpSolver< Problem, SelectionStrategy > | Quadratic program solver |
Cshark::QpSparseArray< QpFloatType > | Specialized container class for multi-class SVM problems |
Cshark::QpStoppingCondition | Stopping conditions for quadratic programming |
►Crandom_access_iterator_base | |
Cshark::blas::triangular_matrix< T, Orientation, TriangularType >::major1_iterator< TIter > | |
Cshark::blas::triangular_matrix< T, Orientation, TriangularType >::major2_iterator< TIter > | |
Cshark::Individual< PointType, FitnessTypeT, Chromosome >::RankOrdering | Ordering relation by the ranks of the individuals |
Cshark::RealSpace | The RealSpace can't be enumerated. Infinite values are just too much |
Cshark::tags::RealSpace | A tag for EnumerationSpaces. It tells the functions that the space is real and cannot be enumerated |
Cshark::blas::compressed_vector< T, I >::reference | |
Cshark::blas::compressed_matrix< T, I >::reference | |
Cshark::RegularizedKernelMatrix< InputType, CacheType > | Kernel Gram matrix with modified diagonal |
Cshark::WilcoxonRankSumTest::Result | Stores result of Wilcoxon rank-sum test |
Cshark::RadiusMarginQuotient< InputType, CacheType >::Result | |
Cshark::ResultSet< SearchPointT, ResultT > | |
►Cshark::ResultSet< SearchPointTypeT, double > | |
►Cshark::SingleObjectiveResultSet< SearchPointTypeT > | Result set for single objective algorithm |
Cshark::ValidatedSingleObjectiveResultSet< SearchPointTypeT > | Result set for validated points |
Cshark::statistics::ResultTable< Parameter > | Stores results of a running experiment |
Cshark::RFTrainer::RFAttribute | |
Cshark::ROC | ROC-Curve - false negatives over false positives |
Cshark::RouletteWheelSelection | Fitness-proportional selection operator |
Cshark::QpSparseArray< QpFloatType >::Row | Data structure describing a row of the sparse array |
Cshark::Sampler< Rng > | Samples a random point |
Cshark::AbstractObjectiveFunction< PointType, ResultT >::SecondOrderDerivative | |
►CSHARK_ITERATOR_FACADE | |
Cshark::IndexedIterator< Iterator > | Creates an Indexed Iterator, an Iterator which also carries index information using index() |
Cshark::PairIterator< Value, Iterator1, Iterator2 > | A Pair-Iterator which gives a unified view of two ranges |
Cshark::ProxyIterator< Sequence, ValueType, Reference > | Creates an iterator which reinterprets an object as a range
Cshark::Shift | Transformation function adding a vector or a scalar to the elements in a dataset |
Cshark::Shifter | Shifter problem |
Cshark::SimulatedBinaryCrossover< PointType > | Simulated binary crossover operator |
Cshark::SimulatedBinaryCrossover< RealVector > | |
Cshark::SinglePole | |
Cshark::blas::SolveAXB | Flag indicating that a system AX=B is to be solved |
Cshark::blas::SolveXAB | Flag indicating that a system XA=B is to be solved |
Cshark::QpBoxLinear< CompressedRealVector >::SparseVector | Data structure for sparse vectors |
Cshark::CARTClassifier< LabelType >::SplitInfo | |
►Cshark::State | Represents the State of an Object |
Cshark::EmptyState | Default State of an Object which does not need a State |
Cshark::statistics::Statistics< Parameter > | Generates Statistics over the results of an experiment |
►Cshark::detail::SubrangeKernelBase< InputType > | |
Cshark::SubrangeKernel< InputType > | Weighted sum of kernel functions |
►Cshark::SvmProblem< Problem > | |
Cshark::SvmShrinkingProblem< Problem > | |
Cshark::CARTTrainer::TableEntry | Entry of the attribute tables used during CART training
Cshark::WeightedSumKernel< InputType >::tBase | Structure describing a single base kernel
Cshark::TemperedMarkovChain< Operator > | |
Cshark::QpMcDecomp< Matrix >::tExample | Data structure describing one training example |
Cshark::Timer | Timer abstraction with microsecond resolution |
Cshark::TournamentSelection< Predicate > | Tournament selection operator |
Cboost::serialization::tracking_level< shark::TypedFlags< T > > | |
Cboost::serialization::tracking_level< std::vector< T > > | |
Cshark::TransformedData< Functor, T > | |
Cshark::TreeConstruction | Stopping criteria for tree construction |
Cshark::Truncate | Transformation function truncating elements in a dataset |
Cshark::TruncateAndRescale | Transformation function first truncating and then rescaling elements in a dataset |
Cshark::TruncatedExponential_distribution< RealType > | Boost.Random-compatible distribution for a truncated exponential. See TruncatedExponential for more details
Cshark::QpMcDecomp< Matrix >::tVariable | Data structure describing one variable of the problem |
Cshark::TwoPointStepSizeAdaptation | Step size adaptation based on the success of the new population compared to the old |
Cshark::TwoStateSpace< State1, State2 > | The TwoStateSpace is a discrete Space with only two values, for example {0,1} or {-1,1} |
►Ctype | |
Cshark::Batch< detail::MarkovChainSample< HiddenSample, VisibleSample > > | |
Cshark::Batch< InputVectorType > | |
Cshark::Batch< T > | Helper class for working with different batch types
Cshark::UniformCrossover | Uniform crossover of arbitrary individuals |
Cshark::UniformRankingSelection | Selects individuals from the combined range of parent and offspring individuals
Cshark::QpMcBoxDecomp< Matrix >::Variable | Data structure describing one variable of the problem
Cshark::QpMcSimplexDecomp< Matrix >::Variable | Data structure describing one variable of the problem |
►Cvariate_generator | |
Cshark::Bernoulli< RngType > | This class simulates a "Bernoulli trial", which is like a coin toss |
Cshark::Binomial< RngType > | Models a binomial distribution with parameters p and n |
Cshark::Cauchy< RngType > | Cauchy distribution |
Cshark::DiffGeometric< RngType > | Random variable with a diff-geometric distribution
Cshark::Dirichlet< RngType > | Implements a Dirichlet distribution |
Cshark::DiscreteUniform< RngType > | Implements the discrete uniform distribution |
Cshark::Erlang< RngType > | Erlang distributed random variable |
Cshark::Gamma< RngType > | Gamma distributed random variable |
Cshark::Geometric< RngType > | Implements the geometric distribution |
Cshark::HyperGeometric< RngType > | Random variable with a hypergeometric distribution |
Cshark::LogNormal< RngType > | Implements a log-normal distribution with location parameter m and scale parameter s
Cshark::NegExponential< RngType > | Implements the negative exponential distribution
Cshark::Normal< RngType > | Implements a univariate normal (Gaussian) distribution |
Cshark::Poisson< RngType > | Implements a Poisson distribution with parameter mean |
Cshark::TruncatedExponential< RngType > | Implements a generator for the truncated exponential function |
Cshark::Uniform< RngType > | Implements a continuous uniform distribution |
Cshark::Weibull< RngType > | Weibull distributed random variable |
►Cshark::blas::vector_expression< E > | Base class for Vector Expression models |
►Cshark::blas::vector_container< compressed_vector< T, I > > | |
Cshark::blas::compressed_vector< T, I > | Compressed array based sparse vector |
►Cshark::blas::vector_container< vector< std::size_t > > | |
►Cshark::blas::vector< std::size_t > | |
Cshark::blas::permutation_matrix | |
►Cshark::blas::vector_container< vector< T > > | |
Cshark::blas::vector< T > | A dense vector of values of type T |
►Cshark::blas::vector_expression< C > | |
Cshark::blas::vector_container< C > | Base class for Vector container models |
►Cshark::blas::vector_expression< dense_vector_adaptor< T > > | |
Cshark::blas::dense_vector_adaptor< T > | Represents a given chunk of memory as a dense vector of elements of type T |
►Cshark::blas::vector_expression< matrix_column< M > > | |
Cshark::blas::matrix_column< M > | |
►Cshark::blas::vector_expression< matrix_row< M > > | |
Cshark::blas::matrix_row< M > | |
►Cshark::blas::vector_expression< matrix_row< Matrix > > | |
►Cshark::blas::matrix_row< Matrix > | |
Cshark::blas::temporary_proxy< blas::matrix_row< Matrix > > | |
►Cshark::blas::vector_expression< matrix_vector_prod< MatA, VecV > > | |
Cshark::blas::matrix_vector_prod< MatA, VecV > | |
►Cshark::blas::vector_expression< matrix_vector_range< M > > | |
Cshark::blas::matrix_vector_range< M > | |
►Cshark::blas::vector_expression< scalar_vector< T > > | |
Cshark::blas::scalar_vector< T > | Vector expression representing a constant valued vector |
►Cshark::blas::vector_expression< sparse_vector_adaptor< T, I > > | |
Cshark::blas::sparse_vector_adaptor< T, I > | |
►Cshark::blas::vector_expression< vector_addition< E1, E2 > > | |
Cshark::blas::vector_addition< E1, E2 > | |
►Cshark::blas::vector_expression< vector_binary< E1, E2, F > > | |
Cshark::blas::vector_binary< E1, E2, F > | |
►Cshark::blas::vector_expression< vector_range< V > > | |
Cshark::blas::vector_range< V > | A vector referencing a contiguous subrange of the elements of a vector v, as specified by a range
►Cshark::blas::vector_expression< vector_reference< V > > | |
Cshark::blas::vector_reference< V > | |
►Cshark::blas::vector_expression< vector_scalar_multiply< E > > | |
Cshark::blas::vector_scalar_multiply< E > | Implements multiplication of a vector by a scalar
►Cshark::blas::vector_expression< vector_unary< E, F > > | |
Cshark::blas::vector_unary< E, F > | Class implementing vector transformation expressions |
Cshark::blas::vector_set_expression< E > | Base class for expressions of vector sets |
Cshark::VectorMatrixTraits< VectorType > | Template which finds the best-fitting matrix type for every vector type
Cshark::BoundingBoxComputer< Set >::VolumeComparator | Compares points based on their contributed volume |
Cshark::Weibull_distribution< RealType > | Weibull distribution |
Cshark::WilcoxonRankSumTest | Wilcoxon rank-sum test / Mann–Whitney U test |
Cshark::WS2MaximumGradientCriterion | Working set selection by maximization of the projected gradient |
►CP | |
Cshark::blas::temporary_proxy< P > | |
►CTrainer | |
Cshark::AbstractSvmTrainer< InputType, LabelType, Model, Trainer > | Super class of all kernelized (non-linear) SVM trainers |