javacv


(Seeing how many readers come to this article suggests it may be worth keeping, but this write-up is fairly shallow. If you are looking for real substance, please head over to:

http://www.cnblogs.com/letben/p/5885799.html

Note, however, that you will need to download the 169 MB jar package from that page.)

 

It was not exactly a sudden idea, but Eclipse really can run the OpenCV libraries. Here is some background on getting it running:

https://github.com/bytedeco/javacv#manual-installation

javacv 

Introduction:

JavaCV uses wrappers from the JavaCPP Presets and provides utility classes to make their functionality easier to use on the Java platform, including Android. (The JavaCPP Presets wrap libraries commonly used by researchers in the field of computer vision, including OpenCV, FFmpeg, libdc1394, PGR FlyCapture, OpenKinect, videoInput, ARToolKitPlus, and flandmark.)

 

JavaCV also comes with hardware-accelerated full-screen image display (CanvasFrame and GLCanvasFrame), easy-to-use methods to execute code in parallel on multiple cores, user-friendly geometric and color calibration of cameras and projectors (GeometricCalibrator, ProCamGeometricCalibrator, ProCamColorCalibrator), detection and matching of feature points (ObjectFinder), a set of classes that implement alignment of projector-camera image systems (mainly GNImageAligner, ProjectiveTransformer, ProjectiveColorTransformer, ProCamTransformer, and ReflectanceInitializer), a blob analysis package (Blobs), as well as miscellaneous functionality in the JavaCV class. Some of these classes also have OpenCL and OpenGL counterparts, whose names end with CL or start with GL, for example: JavaCVCL, GLCanvasFrame, etc.

To learn how to use the API, since documentation is currently lacking, please refer to the Sample Usage section below as well as the sample programs, including two for Android (FacePreview.java and RecordActivity.java), which can be found in the samples directory. You may also find it useful to refer to the source code of ProCamCalib and ProCamTracker, as well as the examples ported from the OpenCV2 Cookbook and the associated wiki pages.

 

Please keep me informed of any updates or fixes you make to the code so that I may integrate them into the next release. Thank you. And feel free to ask questions on the mailing list if you run into problems with the software. I know there is still plenty of room for improvement…

Downloads:

To install the JAR files manually, obtain the following archives and follow the continuously updated documentation and instructions below:

The binary archives contain builds for Android, Linux, Mac OS X, and Windows. The JAR files for specific child modules or platforms can also be obtained individually from the Maven Central Repository.
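For example, a single platform-specific artifact could be pulled in with a dependency along these lines (the artifact name, version, and classifier shown here are illustrative assumptions based on the JavaCPP Presets naming scheme for release 1.2):

```xml
<dependency>
    <groupId>org.bytedeco.javacpp-presets</groupId>
    <artifactId>opencv</artifactId>
    <version>3.1.0-1.2</version>
    <!-- Native binaries for one platform only -->
    <classifier>linux-x86_64</classifier>
</dependency>
```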

We can also have everything downloaded and installed automatically with one of the following configurations:

  • Maven (inside the pom.xml file)

    <dependency>
      <groupId>org.bytedeco</groupId>
      <artifactId>javacv</artifactId>
      <version>1.2</version>
    </dependency>

  • Gradle (inside the build.gradle file)

    repositories {
      mavenCentral()
    }
    dependencies {
      compile group: 'org.bytedeco', name: 'javacv', version: '1.2'
    }

  • sbt (inside the build.sbt file)

    classpathTypes += "maven-plugin"
    libraryDependencies += "org.bytedeco" % "javacv" % "1.2"

One additional note: we may need to set the javacpp.platform system property (via the -D command-line option) to something like android-arm, or set the javacpp.platform.dependencies property to actually get all the binaries for Android, Linux, Mac OS X, and Windows. If the build system does not work, we may need to add the platform-specific artifacts manually. For examples with Gradle and sbt, please refer to the README.md file of the JavaCPP Presets. Another reliable option for Scala users is sbt-javacv.
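As a sketch, the -D option mentioned above is passed on the Maven command line roughly like this (the package goal is just an example):

```shell
# Build against a single target platform only (here Android ARM):
mvn package -Djavacpp.platform=android-arm

# Or fetch the binaries for all supported platforms:
mvn package -Djavacpp.platform.dependencies=true
```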

Required Software:
To use JavaCV, you will first need to install the following software:
An implementation of Java SE 7 or newer:
Further, although not always required, some functionality of JavaCV also relies on:

Finally, please make sure everything has the same bitness: 32-bit and 64-bit modules must never be mixed.

Manual Installation:

Simply put all the desired JAR files (opencv*.jar, ffmpeg*.jar, etc.), as well as javacpp.jar and javacv.jar, somewhere in your classpath. Here are some more specific instructions for the most common use cases:

NetBeans (Java SE 7 or newer):

1. In the Projects window, right-click the Libraries node of your project and select "Add JAR/Folder...".

2. Locate the JAR files, select them, and click OK.

Eclipse (Java SE 7 or newer):

1. Navigate to Project > Properties > Java Build Path > Libraries and click "Add External JARs...".

2. Locate the JAR files, select them, and click OK.

IntelliJ IDEA (Android 4.0 or newer):

1. Follow the instructions on this page: http://developer.android.com/training/basics/firstapp/

2. Copy all the JAR files into the app/libs subdirectory.

3. Navigate to File > Project Structure > app > Dependencies, click "+", and select "2 File dependency".

4. Select all the JAR files from the libs subdirectory.

After that, the wrapper classes for OpenCV and FFmpeg, for example, can automatically access all of their C/C++ APIs:

Sample Usage:

The class definitions are basically ports to Java of the original header files in C/C++, and I deliberately decided to keep as much of the original syntax as possible. For example, here is a method that tries to load an image file, smooth it, and save it back to disk:

import static org.bytedeco.javacpp.opencv_core.*;
import static org.bytedeco.javacpp.opencv_imgproc.*;
import static org.bytedeco.javacpp.opencv_imgcodecs.*;

public class Smoother {
    public static void smooth(String filename) { 
        IplImage image = cvLoadImage(filename);
        if (image != null) {
            cvSmooth(image, image);
            cvSaveImage(filename, image);
            cvReleaseImage(image);
        }
    }
}

On top of OpenCV and FFmpeg, JavaCV also comes with helper classes and methods to facilitate their integration into the Java platform. Here is a small demo program showing the most frequently used parts:

import java.io.File;
import java.net.URL;
import org.bytedeco.javacv.*;
import org.bytedeco.javacpp.*;
import org.bytedeco.javacpp.indexer.*;
import static org.bytedeco.javacpp.opencv_core.*;
import static org.bytedeco.javacpp.opencv_imgproc.*;
import static org.bytedeco.javacpp.opencv_calib3d.*;
import static org.bytedeco.javacpp.opencv_objdetect.*;

public class Demo {
    public static void main(String[] args) throws Exception {
        String classifierName = null;
        if (args.length > 0) {
            classifierName = args[0];
        } else {
            URL url = new URL("https://raw.github.com/Itseez/opencv/2.4.0/data/haarcascades/haarcascade_frontalface_alt.xml");
            File file = Loader.extractResource(url, null, "classifier", ".xml");
            file.deleteOnExit();
            classifierName = file.getAbsolutePath();
        }

        // Preload the opencv_objdetect module to work around a known bug.
        Loader.load(opencv_objdetect.class);

        // We can "cast" Pointer objects by instantiating a new object of the desired class.
        CvHaarClassifierCascade classifier = new CvHaarClassifierCascade(cvLoad(classifierName));
        if (classifier.isNull()) {
            System.err.println("Error loading classifier file \"" + classifierName + "\".");
            System.exit(1);
        }

        // The available FrameGrabber classes include OpenCVFrameGrabber (opencv_videoio),
        // DC1394FrameGrabber, FlyCaptureFrameGrabber, OpenKinectFrameGrabber,
        // PS3EyeFrameGrabber, VideoInputFrameGrabber, and FFmpegFrameGrabber.
        FrameGrabber grabber = FrameGrabber.createDefault(0);
        grabber.start();

        // CanvasFrame, FrameGrabber, and FrameRecorder use Frame objects to communicate image data.
        // We need a FrameConverter to interface with other APIs (Android, Java 2D, or OpenCV).
        OpenCVFrameConverter.ToIplImage converter = new OpenCVFrameConverter.ToIplImage();

        // FAQ about IplImage and Mat objects from OpenCV:
        // - For custom raw processing of data, createBuffer() returns an NIO direct
        //   buffer wrapped around the memory pointed by imageData, and under Android we can
        //   also use that Buffer with Bitmap.copyPixelsFromBuffer() and copyPixelsToBuffer().
        // - To get a BufferedImage from an IplImage, or vice versa, we can chain calls to
        //   Java2DFrameConverter and OpenCVFrameConverter, one after the other.
        // - Java2DFrameConverter also has static copy() methods that we can use to transfer
        //   data more directly between BufferedImage and IplImage or Mat via Frame objects.
        IplImage grabbedImage = converter.convert(grabber.grab());
        int width  = grabbedImage.width();
        int height = grabbedImage.height();
        IplImage grayImage    = IplImage.create(width, height, IPL_DEPTH_8U, 1);
        IplImage rotatedImage = grabbedImage.clone();

        // Objects allocated with a create*() or clone() factory method are automatically released
        // by the garbage collector, but may still be explicitly released by calling release().
        // You shall NOT call cvReleaseImage(), cvReleaseMemStorage(), etc. on objects allocated this way.
        CvMemStorage storage = CvMemStorage.create();

        // The OpenCVFrameRecorder class simply uses the CvVideoWriter of opencv_videoio,
        // but FFmpegFrameRecorder also exists as a more versatile alternative.
        FrameRecorder recorder = FrameRecorder.createDefault("output.avi", width, height);
        recorder.start();

        // CanvasFrame is a JFrame containing a Canvas component, which is hardware accelerated.
        // It can also switch into full-screen mode when called with a screenNumber.
        // We should also specify the relative monitor/camera response for proper gamma correction.
        CanvasFrame frame = new CanvasFrame("Some Title", CanvasFrame.getDefaultGamma()/grabber.getGamma());

        // Let's create some random 3D rotation...
        CvMat randomR = CvMat.create(3, 3), randomAxis = CvMat.create(3, 1);
        // We can easily and efficiently access the elements of matrices and images
        // through an Indexer object with the set of get() and put() methods.
        DoubleIndexer Ridx = randomR.createIndexer(), axisIdx = randomAxis.createIndexer();
        axisIdx.put(0, (Math.random()-0.5)/4, (Math.random()-0.5)/4, (Math.random()-0.5)/4);
        cvRodrigues2(randomAxis, randomR, null);
        double f = (width + height)/2.0;  Ridx.put(0, 2, Ridx.get(0, 2)*f);
                                          Ridx.put(1, 2, Ridx.get(1, 2)*f);
        Ridx.put(2, 0, Ridx.get(2, 0)/f); Ridx.put(2, 1, Ridx.get(2, 1)/f);
        System.out.println(Ridx);

        // We can allocate native arrays using constructors taking an integer as argument.
        CvPoint hatPoints = new CvPoint(3);

        while (frame.isVisible() && (grabbedImage = converter.convert(grabber.grab())) != null) {
            cvClearMemStorage(storage);

            // Let's try to detect some faces! but we need a grayscale image...
            cvCvtColor(grabbedImage, grayImage, CV_BGR2GRAY);
            CvSeq faces = cvHaarDetectObjects(grayImage, classifier, storage,
                    1.1, 3, CV_HAAR_FIND_BIGGEST_OBJECT | CV_HAAR_DO_ROUGH_SEARCH);
            int total = faces.total();
            for (int i = 0; i < total; i++) {
                CvRect r = new CvRect(cvGetSeqElem(faces, i));
                int x = r.x(), y = r.y(), w = r.width(), h = r.height();
                cvRectangle(grabbedImage, cvPoint(x, y), cvPoint(x+w, y+h), CvScalar.RED, 1, CV_AA, 0);

                // To access or pass as argument the elements of a native array, call position() before.
                hatPoints.position(0).x(x-w/10)   .y(y-h/10);
                hatPoints.position(1).x(x+w*11/10).y(y-h/10);
                hatPoints.position(2).x(x+w/2)    .y(y-h/2);
                cvFillConvexPoly(grabbedImage, hatPoints.position(0), 3, CvScalar.GREEN, CV_AA, 0);
            }

            // Let's find some contours! but first some thresholding...
            cvThreshold(grayImage, grayImage, 64, 255, CV_THRESH_BINARY);

            // To check if an output argument is null we may call either isNull() or equals(null).
            CvSeq contour = new CvSeq(null);
            cvFindContours(grayImage, storage, contour, Loader.sizeof(CvContour.class),
                    CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);
            while (contour != null && !contour.isNull()) {
                if (contour.elem_size() > 0) {
                    CvSeq points = cvApproxPoly(contour, Loader.sizeof(CvContour.class),
                            storage, CV_POLY_APPROX_DP, cvContourPerimeter(contour)*0.02, 0);
                    cvDrawContours(grabbedImage, points, CvScalar.BLUE, CvScalar.BLUE, -1, 1, CV_AA);
                }
                contour = contour.h_next();
            }

            cvWarpPerspective(grabbedImage, rotatedImage, randomR);

            Frame rotatedFrame = converter.convert(rotatedImage);
            frame.showImage(rotatedFrame);
            recorder.record(rotatedFrame);
        }
        frame.dispose();
        recorder.stop();
        grabber.stop();
    }
}
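The FAQ comments in the demo above note that createBuffer() wraps the memory pointed to by imageData in an NIO direct buffer. As a self-contained sketch of what custom raw pixel processing through such a buffer looks like (using only the JDK, with an illustrative 3-channel BGR layout rather than a real IplImage):

```java
import java.nio.ByteBuffer;

public class BufferDemo {
    // Fill a direct buffer with solid blue pixels (BGR order), then invert
    // the blue channel of the pixel at (row, col) and return its new value.
    static int invertedBlue(int width, int height, int row, int col) {
        int channels = 3;
        ByteBuffer pixels = ByteBuffer.allocateDirect(width * height * channels);
        for (int i = 0; i < width * height; i++) {
            pixels.put((byte) 255).put((byte) 0).put((byte) 0); // B, G, R
        }
        int index = (row * width + col) * channels;  // offset of the blue byte
        pixels.put(index, (byte) (255 - (pixels.get(index) & 0xFF)));
        return pixels.get(index) & 0xFF;
    }

    public static void main(String[] args) {
        // 255 inverted becomes 0.
        System.out.println(invertedBlue(4, 2, 1, 2)); // → 0
    }
}
```

With a real IplImage, the same indexing arithmetic would apply to the buffer returned by createBuffer(), using the image's widthStep instead of width * channels.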

Going further, create a pom.xml file with the following content:

<project>
    <modelVersion>4.0.0</modelVersion>
    <groupId>org.bytedeco.javacv</groupId>
    <artifactId>demo</artifactId>
    <version>1.2</version>
    <dependencies>
        <dependency>
            <groupId>org.bytedeco</groupId>
            <artifactId>javacv</artifactId>
            <version>1.2</version>
        </dependency>
    </dependencies>
</project>

By placing the source code above in src/main/java/Demo.java, we can use the following command to have everything first installed automatically and then executed by Maven:

$ mvn package exec:java -Dexec.mainClass=Demo

Build Instructions:

If the binary files above are not enough for your needs, you may need to rebuild them from the source code; the project's build files were created for that purpose:

Once everything is installed, simply call the usual mvn install command for JavaCPP, the JavaCPP Presets, and JavaCV in turn. By default, JavaCPP requires nothing more than a C++ compiler as a dependency; please refer to the comments inside the pom.xml files for further details.
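As a sketch, the sequence of commands would look roughly like this (the repository layout is assumed from the Developer site below; the directory names are illustrative):

```shell
# Build and install each project into the local Maven repository, in order.
git clone https://github.com/bytedeco/javacpp
git clone https://github.com/bytedeco/javacpp-presets
git clone https://github.com/bytedeco/javacv
(cd javacpp         && mvn install)
(cd javacpp-presets && mvn install)
(cd javacv          && mvn install)
```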

Project lead: Samuel Audet samuel.audet at gmail.com
Developer site: https://github.com/bytedeco/javacv
Discussion group: http://groups.google.com/group/javacv




