5/16/2023

Sound normalizer full

We propose an ensemble approach for multi-target binary classification, where the target class breaks down into a disparate set of pre-defined target-types. The system goal is to maximize the probability of alerting on targets from any type while excluding background clutter. The agent-classifiers that make up the ensemble are binary classifiers trained to classify between one of the target-types vs. clutter. The agent ensemble approach offers several benefits for multi-target classification, including straightforward in-situ tuning of the ensemble to drift in the target population and the ability to give an indication to a human operator of which target-type causes an alert. We propose a combination strategy that sums weighted likelihood ratios of the individual agent-classifiers, where the likelihood ratio is between the target-type for the agent vs. clutter. We show that this combination strategy is optimal under a conditionally non-discriminative assumption. We compare this combiner to the common strategy of selecting the maximum of the normalized agent-scores as the combiner score. We show experimentally that the proposed combiner gives excellent performance on the multi-target binary classification problems of pin-less verification of human faces and vehicle classification using acoustic signatures.

We address the multi-target binary classification problem by employing an ensemble of classification agents, each trained to classify between one of the pre-defined target-types and clutter. Perhaps the most straightforward approach to this problem is to train a single binary classifier that combines all target-types as a single target class. Our combination-of-agents approach provides several benefits over using a single classifier. First, it offers the potential to reduce the necessary complexity for any individual agent as compared to a single, multi-target classifier. Second, it simplifies the process of making in-situ adjustments to the relative importance of individual target-types (including adding or deleting targets), an attribute that is often desirable in detection and classification applications. Third, it provides alignment between the algorithmic structure and a human operator's understanding of the target class breakdown, enhancing the ability of an operator to interact with the classifier by tuning it to reflect changes or drift in the target population. Finally, it enables the classifier to provide an indication of the most likely target-types, given an alert.

There is a wide body of literature on classifier combiners. Kuncheva breaks the literature down into two broad categories: classifier selection and classifier fusion. Of these two categories, classifier selection ensembles are the most closely related to the proposed ensemble of classifier agents. Classifier selectors are often based on the mixture of experts originally proposed by Jacobs et al. The expert classifiers are trained to perform well in a region of the input space, and thus a selection combiner chooses one of the experts to classify each test sample based on the location of the test feature vector. Mixtures of experts have found wide use in a variety of applications. The motivation for our agent-classifiers is similar to that for classifier experts: in each case, the constituent classifiers are specialized with respect to a particular subproblem. However, whereas experts are trained for a specific region of the input space, agents are tuned for a particular target-type. This difference makes a standard classifier selector inappropriate for agent combination.

A critical element of the any-combiner (1) and the new combiner that we propose in this paper is agent normalization. Biometrics researchers have studied classifier normalization extensively. The problem studied in these previous works differs from that studied here in that they assumed that at test time a user would claim an identity, and the system would only evaluate the classifier associated with that user. Score normalization allowed the use of a single threshold for any claimed identity. The Z-norm, F-norm, equal error rate (EER) norm, and model-specific log-likelihood ratio (MS-LLR) norm have all been investigated in such systems. In contrast, we assume that a test sample could belong to any one of the pre-defined target-types or clutter, requiring evaluation and combination of multiple agent scores.

A challenge in multi-target classification systems is that the distribution among target-types is often not known a priori when the system is initially trained. We discuss the effect of the previously proposed normalizations within the context of an agent-combiner in Section 4.4 and compare them in the experiments section.
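To make the two combination strategies concrete, here is a minimal sketch. Everything in it is an illustrative assumption rather than the paper's implementation: the agent names, score statistics, and weights are made up, and the per-agent likelihood ratios use simple Gaussian score models. It Z-normalizes each agent's raw score against its clutter statistics, then contrasts the max-of-normalized-scores baseline with a weighted sum of per-agent likelihood ratios (target-type vs. clutter).

```python
import math

# Hypothetical per-agent score statistics (e.g., as might be estimated from
# held-out data). Each agent is a binary classifier for one target-type vs.
# clutter. The Gaussian score models below are an illustrative assumption,
# not the paper's actual models.
AGENTS = {
    "type_A": {"clutter_mean": 0.0, "clutter_std": 1.0,
               "target_mean": 3.0, "target_std": 1.2, "weight": 0.5},
    "type_B": {"clutter_mean": 0.5, "clutter_std": 2.0,
               "target_mean": 4.0, "target_std": 1.5, "weight": 0.5},
}

def gauss_pdf(x, mu, sigma):
    """Gaussian density, used here as a stand-in class-conditional score model."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def z_norm(score, stats):
    # Z-norm: center and scale an agent's raw score by its clutter statistics,
    # so a single alert threshold can be shared across agents.
    return (score - stats["clutter_mean"]) / stats["clutter_std"]

def max_combiner(scores):
    # Baseline strategy: alert on the maximum of the normalized agent scores.
    return max(z_norm(scores[name], AGENTS[name]) for name in AGENTS)

def lr_sum_combiner(scores):
    # Sum-of-weighted-likelihood-ratios strategy: each agent contributes
    # p(score | its target-type) / p(score | clutter), weighted.
    total = 0.0
    for name, stats in AGENTS.items():
        s = scores[name]
        lr = (gauss_pdf(s, stats["target_mean"], stats["target_std"])
              / gauss_pdf(s, stats["clutter_mean"], stats["clutter_std"]))
        total += stats["weight"] * lr
    return total

raw = {"type_A": 2.8, "type_B": 1.0}  # one test sample's raw agent scores
print(max_combiner(raw))    # max of Z-normed scores
print(lr_sum_combiner(raw)) # weighted likelihood-ratio sum
```

Either combiner score would then be compared against a single alert threshold; the likelihood-ratio sum additionally exposes each agent's contribution, which is what lets the system indicate the most likely target-type given an alert.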