First, here are the links to the ZXing source code:
Zip archive: https://codeload.github.com/zxing/zxing/zip/master
GitHub: https://github.com/zxing/zxing
The download is fairly large because it also contains source for other platforms; this post focuses on the Android client.
The scanning implementation in the ZXing package is locked to landscape orientation, and on some phone screens the preview image can appear distorted. Having recently found some time to dig into this, I will first go over a few issues in the Barcode Scanner source code.
- First, why the scanner is fixed to landscape: when we open the camera on an Android phone, its output is rotated 90° clockwise, and Barcode Scanner never rotates it back, so the image only looks upright when the phone is held in landscape. The camera's output is also normally wider than it is tall. If you want the preview to display correctly in other orientations, call setDisplayOrientation(degrees); note that the value to pass in is not the same as the value returned by getWindowManager().getDefaultDisplay().getRotation(). See the code below.
```java
public int getOrientationDegree() {
    int rotation = mActivity.getWindowManager().getDefaultDisplay().getRotation();
    switch (rotation) {
        case Surface.ROTATION_0:
            return 90;
        case Surface.ROTATION_90:
            return 0;
        case Surface.ROTATION_180:
            return 270;
        case Surface.ROTATION_270:
            return 180;
        default:
            return 0;
    }
}
```
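As a quick illustration, this is roughly where that value would be applied. It is only a minimal sketch of a call site (not taken from the Barcode Scanner source), using the legacy android.hardware.Camera API:

```java
// Hypothetical call site. setDisplayOrientation() only rotates what is drawn on the
// SurfaceView; the byte[] buffers handed to onPreviewFrame() keep the camera's
// original orientation.
Camera camera = Camera.open();                          // legacy android.hardware.Camera API
camera.setDisplayOrientation(getOrientationDegree());   // value from the switch above
// ... set the preview display and callbacks as usual, then:
camera.startPreview();
```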
- About the distortion seen on some phone screens: different phones support different camera resolutions. Here is a sample of common preview sizes: 1920x1080, 1280x960, 1280x720, 960x720, 864x480, 800x480, 720x480, 768x432, 640x480, 576x432, 480x320, 384x288, 352x288, 320x240, all given as width x height values the camera can support. To find out which sizes a particular camera supports, call Camera.Parameters.getSupportedPreviewSizes(), which returns the supported list (see the sketch below). The camera's output is then shown on a SurfaceView, and the image appears distorted when the aspect ratio of the preview size set on the camera does not match that of the SurfaceView: during preview the camera output is automatically stretched to fill the SurfaceView, which is exactly the deformation we see.
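If you want to check what a particular device actually supports, something like the following works; it is a small sketch using the legacy android.hardware.Camera API (the TAG constant and the logging are mine):

```java
Camera camera = Camera.open();                       // back-facing camera
for (Camera.Size size : camera.getParameters().getSupportedPreviewSizes()) {
    // Each entry is one width x height combination the camera can preview at.
    Log.d(TAG, "Supported preview size: " + size.width + "x" + size.height);
}
camera.release();                                    // free the camera when done
```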
- While the camera is previewing we can grab the frames we see on screen, but we only want the part inside the viewfinder frame, so we have to compute the frame's position and size. Let's first look at how the source code does this.
```xml
<resources>
    <style name="CaptureTheme" parent="android:Theme.Holo">
        <item name="android:windowFullscreen">true</item>
        <item name="android:windowContentOverlay">@null</item>
        <item name="android:windowActionBarOverlay">true</item>
        <item name="android:windowActionModeOverlay">true</item>
    </style>
</resources>
```
```xml
<com.google.zxing.client.android.ViewfinderView
    android:id="@+id/viewfinder_view"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent"/>
```
```java
/**
 * Like {@link #getFramingRect} but coordinates are in terms of the preview frame,
 * not UI / screen.
 *
 * @return {@link Rect} expressing barcode scan area in terms of the preview size
 */
public synchronized Rect getFramingRectInPreview() {
    if (framingRectInPreview == null) {
        Rect framingRect = getFramingRect();
        if (framingRect == null) {
            return null;
        }
        Rect rect = new Rect(framingRect);
        Point cameraResolution = configManager.getCameraResolution();
        Point screenResolution = configManager.getScreenResolution();
        if (cameraResolution == null || screenResolution == null) {
            // Called early, before init even finished
            return null;
        }
        rect.left = rect.left * cameraResolution.x / screenResolution.x;
        rect.right = rect.right * cameraResolution.x / screenResolution.x;
        rect.top = rect.top * cameraResolution.y / screenResolution.y;
        rect.bottom = rect.bottom * cameraResolution.y / screenResolution.y;
        framingRectInPreview = rect;
    }
    return framingRectInPreview;
}
```
From the source we can see that the app uses a fullscreen theme, so the screen size is used when mapping the capture area: first the size of the ViewfinderView frame on screen is obtained, i.e. the rectangle we actually see, and then the corresponding rectangle in the camera image is computed from the ratio between the camera resolution and the screen resolution.
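To make the proportional mapping concrete, here is a tiny worked example with made-up numbers (purely illustrative):

```java
// Screen (landscape): 1280 x 720.  Chosen camera preview size: 800 x 480.
// A framing-rect edge at screen x = 240 ends up in the preview frame at:
int previewLeft = 240 * 800 / 1280;   // = 150
// and an edge at screen y = 180 ends up at:
int previewTop = 180 * 480 / 720;     // = 120
```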
```java
/**
 * A factory method to build the appropriate LuminanceSource object based on the format
 * of the preview buffers, as described by Camera.Parameters.
 *
 * @param data A preview frame.
 * @param width The width of the image.
 * @param height The height of the image.
 * @return A PlanarYUVLuminanceSource instance.
 */
public PlanarYUVLuminanceSource buildLuminanceSource(byte[] data, int width, int height) {
    Rect rect = getFramingRectInPreview();
    if (rect == null) {
        return null;
    }
    // Go ahead and assume it's YUV rather than die.
    return new PlanarYUVLuminanceSource(data, width, height, rect.left, rect.top,
                                        rect.width(), rect.height(), false);
}
```
The buildLuminanceSource() method above lives in CameraManager. Its data parameter is the full preview frame delivered by the camera, and width and height are the frame's dimensions. PlanarYUVLuminanceSource then crops the rectangle returned by getFramingRectInPreview() out of the full image, and that cropped data is finally decoded, as the following code shows.
```java
/**
 * Decode the data within the viewfinder rectangle, and time how long it took. For efficiency,
 * reuse the same reader objects from one decode to the next.
 *
 * @param data The YUV preview frame.
 * @param width The width of the preview frame.
 * @param height The height of the preview frame.
 */
private void decode(byte[] data, int width, int height) {
    long start = System.currentTimeMillis();
    Result rawResult = null;
    PlanarYUVLuminanceSource source = activity.getCameraManager().buildLuminanceSource(data, width, height);
    if (source != null) {
        BinaryBitmap bitmap = new BinaryBitmap(new HybridBinarizer(source));
        try {
            rawResult = multiFormatReader.decodeWithState(bitmap);
        } catch (ReaderException re) {
            // continue
        } finally {
            multiFormatReader.reset();
        }
    }
    Handler handler = activity.getHandler();
    if (rawResult != null) {
        // Don't log the barcode contents for security.
        long end = System.currentTimeMillis();
        Log.d(TAG, "Found barcode in " + (end - start) + " ms");
        if (handler != null) {
            Message message = Message.obtain(handler, R.id.decode_succeeded, rawResult);
            Bundle bundle = new Bundle();
            bundleThumbnail(source, bundle);
            message.setData(bundle);
            message.sendToTarget();
        }
    } else {
        if (handler != null) {
            Message message = Message.obtain(handler, R.id.decode_failed);
            message.sendToTarget();
        }
    }
}
```
- Once decoding finishes, a success or failure message is sent to CaptureActivityHandler. As the code below shows, on success the result is handed back to the activity, while on failure another preview frame is requested and decoding starts again, repeating until a barcode is found.
```java
case R.id.decode_succeeded:
    state = State.SUCCESS;
    Bundle bundle = message.getData();
    Bitmap barcode = null;
    float scaleFactor = 1.0f;
    if (bundle != null) {
        byte[] compressedBitmap = bundle.getByteArray(DecodeThread.BARCODE_BITMAP);
        if (compressedBitmap != null) {
            barcode = BitmapFactory.decodeByteArray(compressedBitmap, 0, compressedBitmap.length, null);
            // Mutable copy:
            barcode = barcode.copy(Bitmap.Config.ARGB_8888, true);
        }
        scaleFactor = bundle.getFloat(DecodeThread.BARCODE_SCALED_FACTOR);
    }
    activity.handleDecode((Result) message.obj, barcode, scaleFactor);
    break;
case R.id.decode_failed:
    // We're decoding as fast as possible, so when one decode fails, start another.
    state = State.PREVIEW;
    cameraManager.requestPreviewFrame(decodeThread.getHandler(), R.id.decode);
    break;
```
With the walkthrough of the original code done, let's look at the simplified, modified version.
- First, the screen rotation issue. Knowing why rotation breaks things, the fix is straightforward: set the camera's display rotation by calling setDisplayOrientation(). Note that after setDisplayOrientation() only the image drawn on the SurfaceView is rotated; the width and height of the preview frames themselves do not change. You can verify this by holding the phone upright in portrait and displaying the captured picture: it is still the camera's original, unrotated image. Because the on-screen image has been rotated, cropping the data with the method from the third point above can easily go out of bounds and crash. And if the viewfinder window is a rectangle rather than a square, the window on screen shows an upright image while the decoded picture comes out rotated 90°. So when extracting the image we have to recompute the rectangle and swap its width and height. The original code is modified as follows.
```java
public synchronized Rect getFramingRectInPreview() {
    if (framingRectInPreview == null) {
        Rect framingRect = ScanManager.getInstance().getViewfinderRect();
        Point cameraResolution = configManager.getCameraResolution();
        if (framingRect == null || cameraResolution == null || surfacePoint == null) {
            return null;
        }
        Rect rect = new Rect(framingRect);
        float scaleX = cameraResolution.x * 1.0f / surfacePoint.y;
        float scaleY = cameraResolution.y * 1.0f / surfacePoint.x;
        if (isPortrait) {
            rect.left = (int) (framingRect.top * scaleY);
            rect.right = (int) (framingRect.bottom * scaleY);
            rect.top = (int) (framingRect.left * scaleX);
            rect.bottom = (int) (framingRect.right * scaleX);
        } else {
            scaleX = cameraResolution.x * 1.0f / surfacePoint.x;
            scaleY = cameraResolution.y * 1.0f / surfacePoint.y;
            rect.left = (int) (framingRect.left * scaleX);
            rect.right = (int) (framingRect.right * scaleX);
            rect.top = (int) (framingRect.top * scaleY);
            rect.bottom = (int) (framingRect.bottom * scaleY);
        }
        framingRectInPreview = rect;
    }
    return framingRectInPreview;
}
```
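The modified method uses two fields that are not shown in the excerpt, isPortrait and surfacePoint. As a rough sketch of how they could be initialized when the camera is set up (the field names follow the excerpt, but this initialization is my assumption, not the project's actual code):

```java
// Assumed initialization inside CameraManager, once the camera parameters are known.
// maxPoint is assumed to be the maximum size available to the SurfaceView.
Point screenResolution = configManager.getScreenResolution();
isPortrait = screenResolution.x < screenResolution.y;  // portrait when the screen is taller than wide
surfacePoint = new Point();
findBestSurfacePoint(maxPoint);                        // fills surfacePoint; the method is shown further below
```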
- As for the distortion problem, the simplest fix is to compute the SurfaceView's size from the camera's preview size. Here is the modified code first.
```java
public void initFromCameraParameters(Camera camera, Point maxPoint) {
    Camera.Parameters parameters = camera.getParameters();
    Point size = new Point(maxPoint.y, maxPoint.x);
    cameraResolution = CameraConfigurationUtils.findBestPreviewSizeValue(parameters, size);
    Log.i(TAG, "Camera resolution: " + cameraResolution);
    Log.i(TAG, "size resolution: " + size);
}
```
```java
public void findBestSurfacePoint(Point maxPoint) {
    Point cameraResolution = configManager.getCameraResolution();
    if (cameraResolution == null || maxPoint == null || maxPoint.x == 0 || maxPoint.y == 0) {
        return;
    }
    double scaleX, scaleY, scale;
    if (maxPoint.x < maxPoint.y) {
        scaleX = cameraResolution.x * 1.0f / maxPoint.y;
        scaleY = cameraResolution.y * 1.0f / maxPoint.x;
    } else {
        scaleX = cameraResolution.x * 1.0f / maxPoint.x;
        scaleY = cameraResolution.y * 1.0f / maxPoint.y;
    }
    scale = scaleX > scaleY ? scaleX : scaleY;
    if (maxPoint.x < maxPoint.y) {
        surfacePoint.x = (int) (cameraResolution.y / scale);
        surfacePoint.y = (int) (cameraResolution.x / scale);
    } else {
        surfacePoint.x = (int) (cameraResolution.x / scale);
        surfacePoint.y = (int) (cameraResolution.y / scale);
    }
}
```
When the camera is initialized in CameraConfigurationManager, i.e. in initFromCameraParameters, we pass in the maximum width and height available to the SurfaceView. CameraConfigurationUtils.findBestPreviewSizeValue(parameters, size) then returns the preview size that best matches that SurfaceView, and we use it. However, that size is one of the fixed values from the list above, while the SurfaceView's dimensions can vary widely, so the preview can still end up distorted. We therefore go the other way: from the preview size the camera gives us, we compute the SurfaceView size with the closest matching aspect ratio, which is what findBestSurfacePoint() does. Once that size is computed, all that remains is to resize the SurfaceView, and the preview no longer distorts.
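That last resizing step is not shown in the excerpts; here is a minimal sketch, assuming the activity can reach the SurfaceView and the computed surfacePoint (my own illustration, not project code):

```java
// Apply the aspect-ratio-matched size computed by findBestSurfacePoint().
ViewGroup.LayoutParams lp = surfaceView.getLayoutParams();
lp.width = surfacePoint.x;
lp.height = surfacePoint.y;
surfaceView.setLayoutParams(lp);   // triggers a re-layout, so the preview no longer stretches
```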
I won't spell out any further details here; for the full process, see the implementation code, click to download.
This is my first blog post, so there are bound to be rough spots and mistakes; corrections and comments are welcome.
