WASM examples: JS canvas animation examples


Some references for drawing graphics with WASM:

fhtr.org/gravityring/sprites.html

Drawing a maze with Canvas + WASM - 知乎 (zhihu.com)

WebGL resizing the canvas (webglfundamentals.org)

 


CanvasKit demos:

https://demos.skia.org/demo/

src\third_party\skia\demos.skia.org\Makefile

Based on that Makefile you can serve the demos locally: python -m SimpleHTTPServer 8123

Then visit: http://localhost:8123/demos/hello_world/index.html

If the JS it downloads cannot be reached, replace it with a local copy:   <script type="text/javascript" src="https://unpkg.com/canvaskit-wasm@latest/bin/full/canvaskit.js"></script>
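The demo pages load CanvasKit through the global CanvasKitInit function exported by canvaskit.js. A minimal sketch of pointing it at a local copy (the /local/canvaskit/ path and the canvas id below are placeholders, not part of the demos):

CanvasKitInit({
  // resolve canvaskit.wasm next to the locally copied canvaskit.js
  locateFile: (file) => '/local/canvaskit/' + file,
}).then((CanvasKit) => {
  const surface = CanvasKit.MakeCanvasSurface('my_canvas'); // <canvas id="my_canvas">
  const canvas = surface.getCanvas();
  canvas.clear(CanvasKit.WHITE);
  surface.flush();
});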

 

 

Source code behind the online SKP debugger:

E:\dev\chromium96\src\third_party\skia\modules\canvaskit\wasm_tools

Directories:
E:\dev\chromium96\src\third_party\skia\experimental\wasm-skp-debugger
E:\dev\chromium96\src\third_party\skia\tools\debugger

Related:
E:\dev\chromium96\src\third_party\skia\experimental\wasm-skp-debugger\debugger_bindings.cpp

#include "tools/debugger/DebugCanvas.h"

#include "tools/debugger/DebugLayerManager.h"

 

Source of the debugger website (?):

third_party/skia/modules/canvaskit/debugger_bindings.cpp

This file wraps the player class so JS can call it:

class SkpDebugPlayer {
  public:
    SkpDebugPlayer() :
      udm(UrlDataManager(SkString("/data"))){}
    // ...
};

Methods inside the class:

    /* loadSkp deserializes a skp file that has been copied into the shared WASM memory.
     * cptr - a pointer to the data to deserialize.
     * length - length of the data in bytes.
     * The caller must allocate the memory with M._malloc where M is the wasm module in javascript
     * and copy the data into M.buffer at the pointer returned by malloc.
     *
     * uintptr_t is used here because emscripten will not allow binding of functions with pointers
     * to primitive types. We can instead pass a number and cast it to whatever kind of
     * pointer we're expecting.
     *
     * Returns an error string which is populated in the case that the file cannot be read.
     */
    std::string loadSkp(uintptr_t cptr, int length) {
      const uint8_t* data = reinterpret_cast<const uint8_t*>(cptr);
      // Both traditional and multi-frame skp files have a magic word
      SkMemoryStream stream(data, length);
      SkDebugf("make stream at %p, with %d bytes\n",data, length);
      const bool isMulti = memcmp(data, kMultiMagic, sizeof(kMultiMagic) - 1) == 0;


      if (isMulti) {
        SkDebugf("Try reading as a multi-frame skp\n");
        const auto& error = loadMultiFrame(&stream);
        if (!error.empty()) { return error; }
      } else {
        SkDebugf("Try reading as single-frame skp\n");
        // TODO(nifong): Rely on SkPicture's return errors once it provides some.
        frames.push_back(loadSingleFrame(&stream));
      }
      return "";
    }
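
As the comment above describes, the JS caller has to put the file bytes into WASM memory before calling loadSkp. A hedged sketch of that calling convention, using the usual Emscripten module members (_malloc, HEAPU8; the exact names depend on the module configuration). The convenience wrapper Debugger.SkpFilePlayer used in the test below presumably does the same thing internally. skpArrayBuffer stands for an ArrayBuffer fetched elsewhere:

const bytes = new Uint8Array(skpArrayBuffer);        // raw contents of the .skp file
const ptr = Debugger._malloc(bytes.byteLength);      // allocate inside the WASM heap
Debugger.HEAPU8.set(bytes, ptr);                     // copy the bytes to that pointer
const player = new Debugger.SkpDebugPlayer();        // the class bound above
const error = player.loadSkp(ptr, bytes.byteLength); // pointer passed as a plain number
if (error) {
  console.log('failed to load skp:', error);
}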

 

The JS call site: src\third_party\skia\experimental\wasm-skp-debugger\tests\startup.spec.js

    it('can load and draw a skp file on a Web GL canvas', function(done) {
        LoadDebugger.then(catchException(done, () => {
            const surface = Debugger.MakeWebGLCanvasSurface(
                document.getElementById('debugger_view'));

            fetch('/debugger/sample.skp').then(function(response) {
                // Load test file
                if (!response.ok) {
                  throw new Error("HTTP error, status = " + response.status);
                }
                response.arrayBuffer().then(function(buffer) {
                    const fileContents = new Uint8Array(buffer);
                    console.log('fetched /debugger/sample.skp');
                    const player = Debugger.SkpFilePlayer(fileContents);
                    // Draw picture
                    player.drawTo(surface, 789); // number of commands in sample file
                    surface.flush();

                    console.log('drew picture to canvas element');
                    surface.dispose();
                    done();
                });
              });
        }));
    });

 

third_party/skia/tools/debugger/DebugCanvas.h

Drawing multiple SKPs: the transparent-background issue.

Code in debugger_bindings.cpp:

    /* drawTo asks the debug canvas to draw from the beginning of the picture
     * to the given command and flush the canvas.
     */
    void drawTo(SkSurface* surface, int32_t index) {
      // Set the command within the frame or layer event being drawn.
      if (fInspectedLayer >= 0) {
        fLayerManager->setCommand(fInspectedLayer, fp, index);
      } else {
        index = constrainFrameCommand(index);
      }

      auto* canvas = surface->getCanvas();
      canvas->clear(SK_ColorTRANSPARENT);
      if (fInspectedLayer >= 0) {
        // when it's a layer event we're viewing, we use the layer manager to render it.
        fLayerManager->drawLayerEventTo(surface, fInspectedLayer, fp);
      } else {
        // otherwise, its a frame at the top level.
        frames[fp]->drawTo(surface->getCanvas(), index);
      }
      surface->flush();
    }

    // Draws to the end of the current frame.
    void draw(SkSurface* surface) {
      auto* canvas = surface->getCanvas();
      canvas->clear(SK_ColorTRANSPARENT);
      frames[fp]->draw(surface->getCanvas());
      surface->getCanvas()->flush();
    }

 

    /**
        Executes all draw calls to the canvas.
        @param canvas  The canvas being drawn to
     */
    void draw(SkCanvas* canvas);

    /**
        Executes the draw calls up to the specified index.
        Does not clear the canvas to transparent black first,
        if needed, caller should do that first.
        @param canvas  The canvas being drawn to
        @param index  The index of the final command being executed
        @param m an optional Mth gpu op to highlight, or -1
     */
    void drawTo(SkCanvas* canvas, int index, int m = -1);

Implementation of the header above: third_party/skia/tools/debugger/DebugCanvas.cpp

void DebugCanvas::drawTo(SkCanvas* originalCanvas, int index, int m) {
    SkASSERT(!fCommandVector.isEmpty());
    SkASSERT(index < fCommandVector.count());

    int saveCount = originalCanvas->save();

    originalCanvas->resetMatrix();
    SkCanvasPriv::ResetClip(originalCanvas);

    DebugPaintFilterCanvas filterCanvas(originalCanvas);
    SkCanvas* finalCanvas = fOverdrawViz ? &filterCanvas : originalCanvas;

#if SK_GPU_V1
    auto dContext = GrAsDirectContext(finalCanvas->recordingContext());

    // If we have a GPU backend we can also visualize the op information
    GrAuditTrail* at = nullptr;
    if (fDrawGpuOpBounds || m != -1) {
        // The audit trail must be obtained from the original canvas.
        at = this->getAuditTrail(originalCanvas);
    }
#endif

    for (int i = 0; i <= index; i++) {
#if SK_GPU_V1
        GrAuditTrail::AutoCollectOps* acb = nullptr;
        if (at) {
            // We need to flush any pending operations, or they might combine with commands below.
            // Previous operations were not registered with the audit trail when they were
            // created, so if we allow them to combine, the audit trail will fail to find them.
            if (dContext) {
                dContext->flush();
            }
            acb = new GrAuditTrail::AutoCollectOps(at, i);
        }
#endif
        if (fCommandVector[i]->isVisible()) {
            fCommandVector[i]->execute(finalCanvas);
        }
#if SK_GPU_V1
        if (at && acb) {
            delete acb;
        }
#endif
    }

    if (SkColorGetA(fClipVizColor) != 0) {
        finalCanvas->save();
        SkPaint clipPaint;
        clipPaint.setColor(fClipVizColor);
        finalCanvas->drawPaint(clipPaint);
        finalCanvas->restore();
    }

    fMatrix = finalCanvas->getLocalToDevice();
    fClip   = finalCanvas->getDeviceClipBounds();
    if (fShowOrigin) {
        const SkPaint originXPaint = SkPaint({1.0, 0, 0, 1.0});
        const SkPaint originYPaint = SkPaint({0, 1.0, 0, 1.0});
        // Draw an origin cross at the origin before restoring to assist in visualizing the
        // current matrix.
        drawArrow(finalCanvas, {-50, 0}, {50, 0}, originXPaint);
        drawArrow(finalCanvas, {0, -50}, {0, 50}, originYPaint);
    }
    finalCanvas->restoreToCount(saveCount);

    if (fShowAndroidClip) {
        // Draw visualization of android device clip restriction
        SkPaint androidClipPaint;
        androidClipPaint.setARGB(80, 255, 100, 0);
        finalCanvas->drawRect(fAndroidClip, androidClipPaint);
    }

#if SK_GPU_V1
    // draw any ops if required and issue a full reset onto GrAuditTrail
    if (at) {
        // just in case there is global reordering, we flush the canvas before querying
        // GrAuditTrail
        GrAuditTrail::AutoEnable ae(at);
        if (dContext) {
            dContext->flush();
        }

        // we pick three colorblind-safe colors, 75% alpha
        static const SkColor kTotalBounds     = SkColorSetARGB(0xC0, 0x6A, 0x3D, 0x9A);
        static const SkColor kCommandOpBounds = SkColorSetARGB(0xC0, 0xE3, 0x1A, 0x1C);
        static const SkColor kOtherOpBounds   = SkColorSetARGB(0xC0, 0xFF, 0x7F, 0x00);

        // get the render target of the top device (from the original canvas) so we can ignore ops
        // drawn offscreen
        GrRenderTargetProxy* rtp = SkCanvasPriv::TopDeviceTargetProxy(originalCanvas);
        GrSurfaceProxy::UniqueID proxyID = rtp->uniqueID();

        // get the bounding boxes to draw
        SkTArray<GrAuditTrail::OpInfo> childrenBounds;
        if (m == -1) {
            at->getBoundsByClientID(&childrenBounds, index);
        } else {
            // the client wants us to draw the mth op
            at->getBoundsByOpsTaskID(&childrenBounds.push_back(), m);
        }
        // Shift the rects half a pixel, so they appear as exactly 1px thick lines.
        finalCanvas->save();
        finalCanvas->translate(0.5, -0.5);
        SkPaint paint;
        paint.setStyle(SkPaint::kStroke_Style);
        paint.setStrokeWidth(1);
        for (int i = 0; i < childrenBounds.count(); i++) {
            if (childrenBounds[i].fProxyUniqueID != proxyID) {
                // offscreen draw, ignore for now
                continue;
            }
            paint.setColor(kTotalBounds);
            finalCanvas->drawRect(childrenBounds[i].fBounds, paint);
            for (int j = 0; j < childrenBounds[i].fOps.count(); j++) {
                const GrAuditTrail::OpInfo::Op& op = childrenBounds[i].fOps[j];
                if (op.fClientID != index) {
                    paint.setColor(kOtherOpBounds);
                } else {
                    paint.setColor(kCommandOpBounds);
                }
                finalCanvas->drawRect(op.fBounds, paint);
            }
        }
        finalCanvas->restore();
        this->cleanupAuditTrail(at);
    }
#endif
}

third_party/blink/renderer/modules/canvas/canvas2d/base_rendering_context_2d.cc

(parameter-list fragment noted from this file: double y, double width, double height, bool for_reset)
 
WebGL initialization parameters (WebGLOptions):
/**
 * Options for configuring a WebGL context. If an option is omitted, a sensible default will
 * be used. These are defined by the WebGL standards.
 */
export interface WebGLOptions {
    alpha?: number;
    antialias?: number;
    depth?: number;
    enableExtensionsByDefault?: number;
    explicitSwapControl?: number;
    failIfMajorPerformanceCaveat?: number;
    majorVersion?: number;
    minorVersion?: number;
    preferLowPowerToHighPerformance?: number;
    premultipliedAlpha?: number;
    preserveDrawingBuffer?: number;
    renderViaOffscreenBackBuffer?: number;
    stencil?: number;
}

 


Surface and Canvas examples:

C:\dev\skia_source\modules\canvaskit\npm_build\multicanvas.html

C:\dev\skia_source\modules\canvaskit\npm_build\types\canvaskit-wasm-tests.ts

 

A surfaceTests snippet (TypeScript):

function surfaceTests(CK: CanvasKit, gl?: WebGLRenderingContext) {
    if (!gl) {
        return;
    }
    const canvasEl = document.querySelector('canvas') as HTMLCanvasElement;
    const surfaceOne = CK.MakeCanvasSurface(canvasEl)!; // $ExpectType Surface
    const surfaceTwo = CK.MakeCanvasSurface('my_canvas')!;
    const surfaceThree = CK.MakeSWCanvasSurface(canvasEl)!; // $ExpectType Surface
    const surfaceFour = CK.MakeSWCanvasSurface('my_canvas')!;
    const surfaceFive = CK.MakeWebGLCanvasSurface(canvasEl, // $ExpectType Surface
        CK.ColorSpace.SRGB, {
        majorVersion: 2,
        preferLowPowerToHighPerformance: 1,
    })!;
    const surfaceSix = CK.MakeWebGLCanvasSurface('my_canvas', CK.ColorSpace.DISPLAY_P3, {
        enableExtensionsByDefault: 2,
    })!;
    const surfaceSeven = CK.MakeSurface(200, 200)!; // $ExpectType Surface
    const m = CK.Malloc(Uint8Array, 5 * 5 * 4);
    const surfaceEight = CK.MakeRasterDirectSurface({
        width: 5,
        height: 5,
        colorType: CK.ColorType.RGBA_8888,
        alphaType: CK.AlphaType.Premul,
        colorSpace: CK.ColorSpace.SRGB,
    }, m, 20);

    surfaceOne.flush();
    const canvas = surfaceTwo.getCanvas(); // $ExpectType Canvas
    const ii = surfaceThree.imageInfo(); // $ExpectType ImageInfo
    const h = surfaceFour.height(); // $ExpectType number
    const w = surfaceFive.width(); // $ExpectType number
    const subsurface = surfaceOne.makeSurface(ii); // $ExpectType Surface
    const isGPU = subsurface.reportBackendTypeIsGPU(); // $ExpectType boolean
    const count = surfaceThree.sampleCnt(); // $ExpectType number
    const img = surfaceFour.makeImageSnapshot([0, 3, 2, 5]); // $ExpectType Image
    const img2 = surfaceSix.makeImageSnapshot(); // $ExpectType Image
    const img3 = surfaceFour.makeImageFromTexture(gl.createTexture()!, {
      height: 40,
      width: 80,
      colorType: CK.ColorType.RGBA_8888,
      alphaType: CK.AlphaType.Unpremul,
      colorSpace: CK.ColorSpace.SRGB,
    });
    const img4 = surfaceFour.makeImageFromTextureSource(new Image()); // $ExpectType Image | null
    const videoEle = document.createElement('video');
    const img5 = surfaceFour.makeImageFromTextureSource(videoEle, {
      height: 40,
      width: 80,
      colorType: CK.ColorType.RGBA_8888,
      alphaType: CK.AlphaType.Unpremul,
    });
    const img6 = surfaceFour.makeImageFromTextureSource(new ImageData(40, 80)); // $ExpectType Image | null

    surfaceSeven.delete();

    const ctx = CK.GetWebGLContext(canvasEl); // $ExpectType number
    CK.deleteContext(ctx);
    const grCtx = CK.MakeGrContext(ctx);
    const surfaceNine = CK.MakeOnScreenGLSurface(grCtx!, 100, 400, // $ExpectType Surface
        CK.ColorSpace.ADOBE_RGB)!;

    const rt = CK.MakeRenderTarget(grCtx!, 100, 200); // $ExpectType Surface | null
    const rt2 = CK.MakeRenderTarget(grCtx!, { // $ExpectType Surface | null
        width: 79,
        height: 205,
        colorType: CK.ColorType.RGBA_8888,
        alphaType: CK.AlphaType.Premul,
        colorSpace: CK.ColorSpace.SRGB,
    });

    const drawFrame = (canvas: Canvas) => {
        canvas.clear([0, 0, 0, 0]);
    };
    surfaceFour.requestAnimationFrame(drawFrame);
    surfaceFour.drawOnce(drawFrame);
}

 

Example: pass an ImageInfo to a canvas's makeSurface to get a compatible surface of a different size, then check whether it is GPU-backed. C++:

void draw(SkCanvas* canvas) {
    sk_sp<SkSurface> surface = SkSurface::MakeRasterN32Premul(5, 6);
    SkCanvas* smallCanvas = surface->getCanvas();
    SkImageInfo imageInfo = SkImageInfo::MakeN32Premul(10, 14);
    sk_sp<SkSurface> compatible = smallCanvas->makeSurface(imageInfo);
    SkDebugf("compatible %c= nullptr\n", compatible == nullptr ? '=' : '!');
    SkDebugf("size = %d, %d\n", compatible->width(), compatible->height());
}

The JS-side check for the same question:
const isGPU = subsurface.reportBackendTypeIsGPU(); // $ExpectType boolean

A test example written for a bug reported against CanvasKit (the bug has since been closed):

let htmlCanvas;
let skCanvas;
let skSurface;
const paint = new CanvasKit.Paint();

function getCanvasLayer(w,h) {
  htmlCanvas = document.getElementById("canvas");
  console.log("Canvas class: %s", htmlCanvas.constructor.name);
  htmlCanvas.height = h;
  htmlCanvas.width = w;
}

function prepareSurface(w, h) {
  if (skSurface && !skSurface.isDeleted()) {
    skSurface.dispose();
    console.log('Disposed surface');
  }
  const context = htmlCanvas.getContext("2d");
  skSurface = CanvasKit.MakeWebGLCanvasSurface(htmlCanvas);
  if (!skSurface) {
    console.log('Failed to make surface');
  }
}

function drawOffscreenCanvas(skps, w,h) {
  let picture = CanvasKit.MakePicture(skps);
  skCanvas = skSurface.getCanvas();
  skCanvas.save();
  skCanvas.drawPicture(picture);
  skCanvas.restore();
  picture.delete();
}

function flushOffscreenCanvas(w,h) {
  skSurface.flush();

  // Here is something interesting, remove line 19 and call line 20, after MakeWebGLCanvasSurface(htmlCanvas)
  // htmlCanvas.getContext("2d") returns null context.
  // htmlCanvas.getContext("webgl") returns null context.
  // htmlCanvas.getContext("webgl2") return a valid WebGL2RenderingContext.
  // Now if we move this getContext before MakeWebGLCanvasSurface, all 3
  // context "2d", "webgl" and "webgl2" return a valid context but 
  // MakeWebGLCanvasSurface will throw an error:
  // Uncaught (in promise) TypeError: Cannot read property 'version' of undefined

  //const context = htmlCanvas.getContext("webgl");
  //console.log("Context class: %s", context.constructor.name);
}

function drawFrame(skps) {
  const canvasLayerWidth = 3000;
  const canvasLayerHeight = 3000;
  const w = 1000;
  const h = 1000;
  getCanvasLayer(canvasLayerWidth,canvasLayerHeight);
  prepareSurface(w,h);
  drawOffscreenCanvas(skps,w,h);
  flushOffscreenCanvas(w,h); 
}

fetch('url', {
  'mode': 'cors'
})
  .then(response => response.blob())
  .then(blob => blob.arrayBuffer())
  .then(skpic => {
      console.log(skpic)
      drawFrame(skpic);
  });

 


Search for: mdn canvas

 

Canvas tutorial: Canvas - Web API reference | MDN (mozilla.org) (Chinese and English versions)

Game development | MDN (mozilla.org): canvas game development examples

 

Cross-origin image saving (draw the image onto a canvas, then save it): Allowing cross-origin use of images and canvas - HTML: HyperText Markup Language | MDN (mozilla.org)

Hands-on tutorial: using canvas rotate to build nested solar-system-style motion (including the moon's orbit) - TNTNT_T's blog on CSDN

3D maze movement: https://developer.mozilla.org/en-US/docs/Web/API/Canvas_API/A_basic_ray-caster

Open-source web technology examples

 

Canvas C++ implementation

C:\dev\chromium104\src\third_party\blink\renderer\core\html\canvas\html_canvas_element.cc

HTMLCanvasElement::CreateLayer

HTMLCanvasElement::Paint and HTMLCanvasElement::PaintInternal draw the 2D content using an SKP or an image snapshot. The snapshot must be unaccelerated, i.e., not GPU-backed:

snapshot = snapshot->MakeUnaccelerated();

webgl:

  if (IsWebGL() && PaintsIntoCanvasBuffer())
    context_->MarkLayerComposited();

HTMLCanvasElement::Snapshot can snapshot either 2D or WebGL content into an image.

 scoped_refptr<StaticBitmapImage> HTMLCanvasElement::Snapshot(
    SourceDrawingBuffer source_buffer) const {
  if (size_.IsEmpty())
    return nullptr;

  scoped_refptr<StaticBitmapImage> image_bitmap;
  if (OffscreenCanvasFrame()) {  // Offscreen Canvas
    DCHECK(OffscreenCanvasFrame()->OriginClean());
    image_bitmap = OffscreenCanvasFrame()->Bitmap();
  } else if (IsWebGL()) {
    if (context_->CreationAttributes().premultiplied_alpha) {
      context_->PaintRenderingResultsToCanvas(source_buffer);
      if (ResourceProvider())
        image_bitmap = ResourceProvider()->Snapshot();
    } else {
      sk_sp<SkData> pixel_data =
          context_->PaintRenderingResultsToDataArray(source_buffer);
      if (pixel_data) {
        // If the accelerated canvas is too big, there is a logic in WebGL code
        // path that scales down the drawing buffer to the maximum supported
        // size. Hence, we need to query the adjusted size of DrawingBuffer.
        gfx::Size adjusted_size = context_->DrawingBufferSize();
        if (!adjusted_size.IsEmpty()) {
          SkColorInfo color_info =
              GetRenderingContextSkColorInfo().makeAlphaType(
                  kUnpremul_SkAlphaType);
          if (color_info.colorType() == kN32_SkColorType)
            color_info = color_info.makeColorType(kRGBA_8888_SkColorType);
          else
            color_info = color_info.makeColorType(kRGBA_F16_SkColorType);
          image_bitmap = StaticBitmapImage::Create(
              std::move(pixel_data),
              SkImageInfo::Make(
                  SkISize::Make(adjusted_size.width(), adjusted_size.height()),
                  color_info));
        }
      }
    }
  } else if (context_) {
    DCHECK(IsRenderingContext2D() || IsImageBitmapRenderingContext() ||
           IsWebGPU());
    image_bitmap = context_->GetImage();
  }

  if (image_bitmap)
    DCHECK(image_bitmap->SupportsDisplayCompositing());
  else
    image_bitmap = CreateTransparentImage(size_);

  return image_bitmap;
}

HTMLCanvasElement::toDataURL generates an image resource that can be embedded in HTML as <img src="data:...">.

HTMLCanvasElement::toBlob works the same way; both correspond to JS functions on the canvas element.
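On the JS side these are canvas.toDataURL() and canvas.toBlob(); a small example (the element ids are illustrative):

const canvas = document.getElementById('my_canvas');
const dataUrl = canvas.toDataURL('image/png');      // "data:image/png;base64,...", usable as <img src>
document.getElementById('preview').src = dataUrl;

canvas.toBlob((blob) => {
  const url = URL.createObjectURL(blob);            // blob: URL, handy for downloads
  console.log('blob url:', url);
}, 'image/png');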

Deciding whether the canvas is drawn with the CPU or the GPU:

   // If the canvas meets the criteria to use accelerated-GPU rendering, and
    // the user signals that the canvas will not be read frequently through
    // getImageData, which is a slow operation with GPU, the canvas will try to
    // use accelerated-GPU rendering.
    // If any of the two conditions fails, or if the creation of accelerated
    // resource provider fails, the canvas will fallback to CPU rendering.
    UMA_HISTOGRAM_BOOLEAN(
        "Blink.Canvas.2DLayerBridge.WillReadFrequently",
        context_ && context_->CreationAttributes().will_read_frequently);

    if (ShouldAccelerate() && context_ &&
        !context_->CreationAttributes().will_read_frequently) {
      canvas2d_bridge_ = Create2DLayerBridge(RasterMode::kGPU);
    }
    if (!canvas2d_bridge_) {
      canvas2d_bridge_ = Create2DLayerBridge(RasterMode::kCPU);
    }
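
The JS-visible knob behind will_read_frequently is the willReadFrequently context attribute: requesting it signals frequent getImageData use, so the canvas stays on (or falls back to) CPU rendering. A small sketch (canvas id is illustrative):

const ctx = document.getElementById('my_canvas')
    .getContext('2d', { willReadFrequently: true });
ctx.fillStyle = 'red';
ctx.fillRect(0, 0, 10, 10);
const pixels = ctx.getImageData(0, 0, 10, 10);  // cheap when rasterization is on the CPU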

GetSourceImageForCanvas

Notifying listeners of image changes: HTMLCanvasElement::NotifyListenersCanvasChanged()

Blocking WebGL when it is not supported: HTMLCanvasElement::IsWebGLBlocked()

 

WebGL code: src\third_party\blink\renderer\modules\webgl

Canvas code: src\third_party\blink\renderer\modules\canvas; see the README.md in this directory.

 

C:\dev\chromium104\src\third_party\blink\renderer\core\paint\html_canvas_painter.cc

RecordForeignLayer

c:\dev\chromium104\src\third_party\blink\renderer\platform\graphics\paint\foreign_layer_display_item.cc


about://gpu shows whether GPU acceleration is currently enabled for canvas.

What to Know

  • In Chrome, go to Chrome Menu > Settings > Advanced. Under System, enable Use hardware acceleration when available. If this is turned off, none of the other switches can enable the GPU.
  • To force acceleration, enter chrome://flags in the search bar. Under Override software rendering list, set to Enabled, then select Relaunch. This ignores the software-rendering setting and forces the GPU.
  • You can check whether hardware acceleration is turned on in Chrome by typing chrome://gpu into the address bar at the top of the browser.

 

In headless mode, Chrome does not start the GPU. You can open the monitored headless browser via chrome://inspect, enter chrome://gpu, and see that everything is software-rendered.

Issue 765284: Support GPU hardware in headless mode

  • For canvas 2D:

Under software rendering, the canvas draw commands are recorded by the CPU into the layer's picture, i.e., an SKP; the SKP can be captured and replayed.

Under GPU drawing, the layer generated is a TextureLayer drawn externally by the GPU; there are no recorded draw commands, so the content cannot be reproduced from an SKP. (Presumably it is drawn directly on the GPU?)

Canvas WebGL is obtained via canvas.getContext("webgl"); it provides 2D and 3D APIs.

When the canvas hands out a 3D WebGL context, SwiftShader software rendering may be required, and a TextureLayer is always generated.

Also, viewing the layers crashes in some cases (this is related to the specific page, not to WebGL itself: opening the WebGL example directly shows the layers view fine), for example: Simple color animation - Web APIs | MDN (mozilla.org)
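For reference, that MDN demo amounts to roughly the following; requesting the "webgl" context is what pushes the canvas onto the TextureLayer path discussed here:

const gl = document.querySelector('canvas').getContext('webgl');
setInterval(() => {
  gl.clearColor(Math.random(), Math.random(), Math.random(), 1.0); // random color each tick
  gl.clear(gl.COLOR_BUFFER_BIT);
}, 1000);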

Canvas is a feature provided by HTML5; you can think of it as a carrier, in plain terms a blank sheet of paper. Canvas 2D amounts to getting the built-in two-dimensional graphics API, i.e., a 2D brush. Canvas 3D gets a WebGL-based graphics API, i.e., a 3D brush. You can pick different brushes to draw on it.
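In code, "picking a brush" is just choosing which context to request; note that a single canvas element can only hand out one kind of context, so the two brushes below need separate canvases:

const el2d = document.createElement('canvas');
const brush2d = el2d.getContext('2d');        // 2D brush
const el3d = document.createElement('canvas');
const brush3d = el3d.getContext('webgl');     // 3D (WebGL) brush; null if WebGL is blocked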

OpenGL is a low-level, driver-level graphics API (directly tied to the graphics card), similar to DirectX. JavaScript living inside the browser cannot touch that low-level OpenGL. To give the Web stronger graphics capabilities, WebGL was introduced in 2010; it lets engineers use JS to call a wrapped subset of the OpenGL ES 2.0 standard API and get hardware-level 3D graphics acceleration.
Skia is an open-source 2D graphics library. SwiftShader is a high-performance, CPU-based implementation of the OpenGL ES and Direct3D graphics APIs; its goal is to provide hardware independence for advanced 3D graphics.

 

WebGL - Web API reference | MDN (mozilla.org)

https://threejs.org/ (a WebGL 3D wrapper library)

three.js getting started

WebGL tutorial

WebGL: capturing a screenshot from the canvas

var gl = canvas.getContext("webgl",  {preserveDrawingBuffer: true});
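
Without preserveDrawingBuffer, the drawing buffer may already have been cleared by the time the screenshot is taken, so the capture comes back blank. A minimal sketch of capturing right after drawing:

const canvas = document.querySelector('canvas');
const gl = canvas.getContext('webgl', { preserveDrawingBuffer: true });
gl.clearColor(0.2, 0.4, 0.6, 1.0);
gl.clear(gl.COLOR_BUFFER_BIT);
const png = canvas.toDataURL('image/png');  // screenshot of the WebGL canvas contents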

face tracker


 

Creating a canvas and testing that it enters accelerated mode:

TEST_F(HTMLCanvasPainterTest, Canvas2DLayerAppearsInLayerTree) {
  // Insert a <canvas> and force it into accelerated mode.
  // Not using SetBodyInnerHTML() because we need to test before document
  // lifecycle update.
  GetDocument().body()->setInnerHTML("<canvas width=300 height=200>");
  auto* element = To<HTMLCanvasElement>(GetDocument().body()->firstChild());
  CanvasContextCreationAttributesCore attributes;
  attributes.alpha = true;
  CanvasRenderingContext* context =
      element->GetCanvasRenderingContext("2d", attributes);
  gfx::Size size(300, 200);
  std::unique_ptr<Canvas2DLayerBridge> bridge = MakeCanvas2DLayerBridge(size);
  element->SetResourceProviderForTesting(nullptr, std::move(bridge), size);
  ASSERT_EQ(context, element->RenderingContext());
  ASSERT_TRUE(context->IsComposited());
  ASSERT_TRUE(element->IsAccelerated());

  // Force the page to paint.
  element->PreFinalizeFrame();
  context->FinalizeFrame();
  element->PostFinalizeFrame();
  UpdateAllLifecyclePhasesForTest();

  // Fetch the layer associated with the <canvas>, and check that it was
  // correctly configured in the layer tree.
  const cc::Layer* layer = context->CcLayer();
  ASSERT_TRUE(layer);
  EXPECT_TRUE(HasLayerAttached(*layer));
  EXPECT_EQ(gfx::Size(300, 200), layer->bounds());
}

 

1. Home » Porting » Connecting C++ and JavaScript (Emscripten docs)

