
Using OpenCV's training and detection in practice: opencv_createsamples and opencv_traincascade (6)

Author: H5之家 Source: H5之家 2015-12-15 08:44


The feature used in a particular classifier is specified by its shape (1a, 2b etc.), position within the region of interest and the scale (this scale is not the same as the scale used at the detection stage, though these two scales are multiplied). For example, in the case of the third line feature (2c) the response is calculated as the difference between the sum of image pixels under the rectangle covering the whole feature (including the two white stripes and the black stripe in the middle) and the sum of the image pixels under the black stripe multiplied by 3 in order to compensate for the differences in the size of areas. The sums of pixel values over rectangular regions are calculated rapidly using integral images (see below and the integral description).
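The integral-image trick described above can be sketched in a few lines. This is an illustrative pure-Python version (the function names are mine, not OpenCV API), including the ×3 weighting used by the three-stripe line feature (2c):

```python
# Minimal sketch of an integral image (summed-area table) in pure Python.
# Function names and the 2c demo are illustrative, not OpenCV's actual API.

def integral_image(img):
    """Build a summed-area table with an extra zero row/column so that
    ii[y][x] equals the sum of img over rows 0..y-1 and columns 0..x-1."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the rectangle [x, x+w) x [y, y+h) in O(1)."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def line_feature_2c(ii, x, y, w, h):
    """Response of the horizontal three-stripe feature (2c): sum over the
    whole rectangle minus 3x the middle (black) stripe of width w//3."""
    whole = rect_sum(ii, x, y, w, h)
    black = rect_sum(ii, x + w // 3, y, w // 3, h)
    return whole - 3 * black
```

On a uniform image the 2c response is zero: the tripled middle-stripe sum exactly cancels the whole-rectangle sum, which is the point of the ×3 compensation factor.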

To see the object detector at work, have a look at the facedetect demo: https://github.com/Itseez/opencv/tree/master/samples/cpp/dbt_face_detection.cpp

The following reference is for the detection part only. There is a separate application called opencv_traincascade that can train a cascade of boosted classifiers from a set of samples.
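For orientation, a typical training run looks roughly like the following. The file names, sample counts, and window size are placeholders to adjust for your own data; the flags themselves are standard opencv_createsamples/opencv_traincascade options:

```shell
# Pack positive samples listed in positives.dat into a .vec file
# (placeholder paths and counts):
opencv_createsamples -info positives.dat -vec samples.vec \
    -num 1000 -w 24 -h 24

# Train a boosted cascade from the .vec file and a list of negatives:
opencv_traincascade -data cascade/ -vec samples.vec -bg negatives.txt \
    -numPos 900 -numNeg 500 -numStages 20 \
    -featureType HAAR -w 24 -h 24
```

Note that -numPos is usually set somewhat below the number of samples in the .vec file, since later stages consume extra positives to replace those rejected by earlier stages.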

Note

In the new C++ interface it is also possible to use LBP (local binary pattern) features in addition to Haar-like features.

[Viola01] Paul Viola and Michael J. Jones. Rapid Object Detection using a Boosted Cascade of Simple Features. IEEE CVPR, 2001. The paper is available online at https://www.cs.cmu.edu/~efros/courses/LBMV07/Papers/viola-cvpr-01.pdf (mentioned above).


7. OpenCV and boosting

(boost.cpp, OpenCV 3.0)

Boosting

A common machine learning task is supervised learning. In supervised learning, the goal is to learn the functional relationship F: y = F(x) between the input x and the output y. Predicting the qualitative output is called classification, while predicting the quantitative output is called regression.

Boosting is a powerful learning concept that provides a solution to the supervised classification learning task. It combines the performance of many "weak" classifiers to produce a powerful committee [125]. A weak classifier is only required to be better than chance, and thus can be very simple and computationally inexpensive. However, many of them, smartly combined, result in a strong classifier that often outperforms most "monolithic" strong classifiers such as SVMs and neural networks.

Decision trees are the most popular weak classifiers used in boosting schemes. Often the simplest decision trees, with only a single split node per tree (called stumps), are sufficient.

The boosted model is based on N training examples (x_i, y_i), 1 ≤ i ≤ N, with x_i ∈ R^K and y_i ∈ {−1, +1}. Here x_i is a K-component vector; each component encodes a feature relevant to the learning task at hand. The desired two-class output is encoded as −1 and +1.

Different variants of boosting are known as Discrete AdaBoost, Real AdaBoost, LogitBoost, and Gentle AdaBoost [49]. All of them are very similar in their overall structure. Therefore, this chapter focuses only on the standard two-class Discrete AdaBoost algorithm, outlined below. Initially the same weight is assigned to each sample (step 2). Then, a weak classifier f_m(x) is trained on the weighted training data (step 3a). Its weighted training error and scaling factor c_m are computed (step 3b). The weights are increased for training samples that have been misclassified (step 3c). All weights are then normalized, and the process of finding the next weak classifier continues for another M − 1 times. The final classifier F(x) is the sign of the weighted sum over the individual weak classifiers (step 4).

Two-class Discrete AdaBoost Algorithm

  • Given N examples (x_i, y_i), 1 ≤ i ≤ N, with x_i ∈ R^K and y_i ∈ {−1, +1}.
  • Start with weights w_i = 1/N, 1 ≤ i ≤ N.
  • Repeat for m = 1, 2, …, M:
      • Fit the classifier f_m(x) ∈ {−1, 1}, using weights w_i on the training data.
      • Compute err_m = E_w[1_(y ≠ f_m(x))] and c_m = log((1 − err_m)/err_m).
      • Set w_i ← w_i · exp(c_m · 1_(y_i ≠ f_m(x_i))), 1 ≤ i ≤ N, and renormalize so that Σ_i w_i = 1.
  • Classify new samples x using the formula: sign(Σ_{m=1}^{M} c_m f_m(x)).

Note: Similar to the classical boosting methods, the current implementation supports two-class classifiers only. For M > 2 classes, there is the AdaBoost.MH algorithm (described in ) that reduces the problem to the two-class problem, yet with a much larger training set.
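The steps above can be sketched as follows. This is a toy pure-Python illustration using decision stumps as the weak classifiers, not OpenCV's boost.cpp implementation; all names and the tiny dataset are made up:

```python
# Toy two-class Discrete AdaBoost with decision stumps (illustrative only).
import math

def stump_predict(threshold, polarity, x):
    """Weak classifier: +1 if polarity*(x - threshold) > 0, else -1."""
    return 1 if polarity * (x - threshold) > 0 else -1

def fit_stump(xs, ys, w):
    """Step 3a: pick the threshold/polarity with lowest weighted error."""
    best = None
    for t in xs:
        for pol in (1, -1):
            err = sum(wi for xi, yi, wi in zip(xs, ys, w)
                      if stump_predict(t, pol, xi) != yi)
            if best is None or err < best[0]:
                best = (err, t, pol)
    return best  # (weighted error, threshold, polarity)

def train_adaboost(xs, ys, rounds=5):
    n = len(xs)
    w = [1.0 / n] * n                       # step 2: uniform weights
    model = []
    for _ in range(rounds):
        err, t, pol = fit_stump(xs, ys, w)  # step 3a
        err = max(err, 1e-10)               # guard against log(0)
        c = math.log((1.0 - err) / err)     # step 3b: scaling factor c_m
        for i in range(n):                  # step 3c: up-weight mistakes
            if stump_predict(t, pol, xs[i]) != ys[i]:
                w[i] *= math.exp(c)
        s = sum(w)
        w = [wi / s for wi in w]            # renormalize so sum(w) == 1
        model.append((c, t, pol))
    return model

def classify(model, x):
    """Step 4: sign of the weighted vote of the weak classifiers."""
    vote = sum(c * stump_predict(t, pol, x) for c, t, pol in model)
    return 1 if vote >= 0 else -1
```

On a one-dimensional, linearly separable toy set a single stump already reaches zero weighted error, so its scaling factor c_m dominates the final vote; on harder data the reweighting in step 3c forces later stumps to concentrate on the previously misclassified samples.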

     
