Implement a Skybox Texture Using Rust and WebGL | by Julien de Charentenay | May, 2022

Simulate vortex particles

Miramar skybox texture applied to the vortex particle simulation web viewer

This story looks at the implementation of a skybox texture using Rust and WebGL for the web viewer for vortex particle simulation described in this story. A skybox texture is "a box with textures on it […] to look like what is very far away including the horizon" (WebGL2 Fundamentals, WebGL2 SkyBox).

The skybox gives depth and context to the WebGL scene. The skybox image used is titled miramar by Hipshot, released here.

The source code is available on GitHub and the live version can be viewed at CFD-webassembly.com.

The views/opinions expressed in this story are my own. This story relates to my personal experience and choices, and is provided with information in the hope that it will be useful but without any warranty.

I would like to mention the following resources that made this implementation possible:

The implementation led me to ponder the question: should the image(s) be loaded in JavaScript and then transferred to Rust/WebAssembly, or directly in Rust? I chose the latter to keep the code pertaining to the skybox within one Rust file: program_skybox.rs.

Preliminaries

The following modifications (see this commit) were made as the skybox rendering requires access to the view and projection matrices separately:

  • camera.rs is modified to allow the extraction of the view and projection matrices separately;
  • the View trait methods draw and redraw are modified to take a Camera struct instead of a Matrix4 struct.
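The changes above can be sketched as follows. This is not the actual camera.rs code: the field names, the Matrix4 representation and the omission of the WebGL context parameter are all assumptions made for illustration.

```rust
// Sketch of the Preliminaries changes; names and the Matrix4 representation
// (column-major [f32; 16]) are assumptions, not the actual camera.rs code.
pub type Matrix4 = [f32; 16];

pub struct Camera {
    view: Matrix4,
    projection: Matrix4,
}

impl Camera {
    pub fn new(view: Matrix4, projection: Matrix4) -> Camera {
        Camera { view, projection }
    }
    // camera.rs now allows the view and projection matrices to be
    // extracted separately, as required by the skybox rendering.
    pub fn view(&self) -> Matrix4 { self.view }
    pub fn projection(&self) -> Matrix4 { self.projection }
}

// The View trait methods take a Camera instead of a pre-multiplied Matrix4
// (the WebGL rendering context parameter is omitted in this sketch).
pub trait View {
    fn draw(&mut self, camera: &Camera);
    fn redraw(&mut self, camera: &Camera);
}
```

Passing the whole Camera lets each view decide whether it needs the combined matrix or the two parts separately.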

Implementation

The skybox rendering is implemented in the struct ProgramSkyBox, file viewer/program_skybox.rs, which implements the View trait.

The WebGL program, composed of vertex and fragment shaders, is taken almost verbatim from WebGL2 SkyBox's shaders and compiled using the helper functions built previously. The program gets compiled when first needed and is reused afterwards.
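For reference, the WebGL2 SkyBox shaders that the program is taken from look like the following, reproduced here as Rust string constants (the constant names are illustrative):

```rust
// Vertex shader: passes the clip-space quad through unchanged and pins the
// depth to 1.0 so the skybox renders behind everything else.
pub const SKYBOX_VERTEX_SHADER: &str = r#"#version 300 es
in vec4 a_position;
out vec4 v_position;
void main() {
  v_position = a_position;
  gl_Position = a_position;
  gl_Position.z = 1.0;
}"#;

// Fragment shader: maps each fragment back to a view direction using the
// inverse of the view-direction/projection product and samples the cube map.
pub const SKYBOX_FRAGMENT_SHADER: &str = r#"#version 300 es
precision highp float;
uniform samplerCube u_skybox;
uniform mat4 u_viewDirectionProjectionInverse;
in vec4 v_position;
out vec4 outColor;
void main() {
  vec4 t = u_viewDirectionProjectionInverse * v_position;
  outColor = texture(u_skybox, normalize(t.xyz / t.w));
}"#;
```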

The skybox rendering is implemented in the redraw method (the draw method simply calls redraw) as the rendering does not depend on the simulation state.

The implementation follows the same steps as described in the WebGL2 SkyBox implementation, but reorganized slightly as follows:

  • Set the WebGL program on the current rendering context:
  • The vertex array object collects "the state of attributes, which buffers to use for each one, and how to pull out data from those buffers" (WebGL2 Fundamentals). It is initialised and populated when first called, and re-used at subsequent calls:
  • For the skybox implementation, we do not convert 3D spatial coordinates into the view space; instead, a quad is created that covers the whole view space and uses the view space coordinates directly. The texture is applied to that quad based on the view direction and projection. The following code extract shows the definition and assignment of this quad, which is again similar to the WebGL2 SkyBox implementation.
  • The textures applied to the skybox, aka cube map, are defined and applied as follows using texture unit 0. More information on the implementation of the texture loading and the assign_textures method is provided in the next section. This step is the last one that is executed only during the first redraw call.
  • The following code is run at every redraw call as it relates to the camera direction, which may change between redraw calls. It uses the view matrix to define the direction of the camera and combines it with the view projection to define the relationship between view space location and texture. Whilst I do understand the matrix operations (lines 10 to 14 swap the y and z coordinates, for example), I do not fully understand how the inversion of the product of the projection and view matrices relates to the cube map texture. Hence please refer to the WebGL2 SkyBox article for more details on this topic.
  • Finally, the rendering function is called after setting the rendering parameters as per the WebGL2 SkyBox implementation.
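The CPU-side data behind the steps above can be sketched as follows. This is not the actual code extract: the real implementation also swaps the y and z coordinates and inverts the product with the projection matrix, neither of which is reproduced here.

```rust
// Clip-space quad covering the whole view: two triangles spanning (-1, -1)
// to (1, 1), fed directly to the vertex shader without any 3D transformation.
pub const QUAD_VERTICES: [f32; 12] = [
    -1.0, -1.0,
     1.0, -1.0,
    -1.0,  1.0,
    -1.0,  1.0,
     1.0, -1.0,
     1.0,  1.0,
];

// Keep only the rotational part of a column-major 4x4 view matrix: zeroing
// the translation keeps the skybox centred on the camera, so only the camera
// direction influences the cube map lookup.
pub fn view_direction(view: &[f32; 16]) -> [f32; 16] {
    let mut m = *view;
    m[12] = 0.0;
    m[13] = 0.0;
    m[14] = 0.0;
    m
}
```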

This covers the implementation of the WebGL rendering aspect, with the exception of the texture loading and the assign_textures method called when the texture is applied.

Texture loading and assignment

I mentioned earlier that the texture image(s) is (are) loaded directly in Rust/WebAssembly. The article "How to load textures in Rust/WebGl" provides an approach using an HtmlImageElement and its associated on_load callback, whilst mentioning that the dropping of the closure needs careful consideration as it "is leaking memory in rust".

I chose to use the fetch API to load the image content and then convert this content into an ImageBitmap. These functions (and others) use promises instead of callbacks and hence alleviate the requirement for dropping closures. The JavaScript promises are handled as Rust futures using the wasm-bindgen-futures crate; see Working with a JS Promise and a Rust Future in the wasm-bindgen guide.

The image fetch and conversion is undertaken during the creation of the ProgramSkyBox object, which is now an asynchronous function. The asynchronous nature of the function is exposed back to JavaScript to allow the creation of a skybox view to be done using the following code:

The loading of textures using the fetch API and the conversion to an ImageBitmap is done as shown below. A single fetch call is made to load a single image that combines all 6 textures required to be applied to the 6 faces of the cube map. The coordinates of the textures within the image are stored using the CubeMapTexSxSySw struct.
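A minimal sketch of this fetch-and-convert step is shown below, assuming the web-sys features Window, Response, Blob and ImageBitmap are enabled. The function name is illustrative, not the article's actual code, and this only runs in a browser context.

```rust
use wasm_bindgen::{JsCast, JsValue};
use wasm_bindgen_futures::JsFuture;
use web_sys::{Blob, ImageBitmap, Response};

// Fetch the combined skybox image and convert it to an ImageBitmap.
// Each JavaScript promise is awaited as a Rust future, so no closure
// needs to be kept alive (or leaked) for an on_load callback.
pub async fn fetch_image_bitmap(url: &str) -> Result<ImageBitmap, JsValue> {
    let window = web_sys::window().ok_or_else(|| JsValue::from_str("no window"))?;
    // fetch(url) resolves to a Response...
    let response: Response = JsFuture::from(window.fetch_with_str(url))
        .await?
        .dyn_into()?;
    // ...whose body is read as a Blob...
    let blob: Blob = JsFuture::from(response.blob()?).await?.dyn_into()?;
    // ...and decoded into an ImageBitmap via createImageBitmap.
    let bitmap: ImageBitmap =
        JsFuture::from(window.create_image_bitmap_with_blob(&blob)?)
            .await?
            .dyn_into()?;
    Ok(bitmap)
}
```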

The loaded ImageBitmap is used alongside the x, y and w parameters stored in the CubeMapTexSxSySw object to identify and apply the relevant section of the image to each of the cube map textures in the assign_textures method. The relevant section of the image is extracted using an HtmlCanvasElement. I initially tried using the texSubImage2D method but had no success, similar to the original poster's question on StackOverflow.
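The extraction via an HtmlCanvasElement can be sketched as below. The helper name and the CubeMapTexSxSySw fields are assumptions based on the description above, and this is browser-only code relying on the relevant web-sys features being enabled.

```rust
use wasm_bindgen::{JsCast, JsValue};
use web_sys::{
    CanvasRenderingContext2d, HtmlCanvasElement, ImageBitmap, WebGl2RenderingContext,
};

// Assumed shape of the struct storing each texture's coordinates in the
// combined image: an x/y offset and a width (the textures are square).
pub struct CubeMapTexSxSySw {
    pub target: u32, // e.g. WebGl2RenderingContext::TEXTURE_CUBE_MAP_POSITIVE_X
    pub x: f64,
    pub y: f64,
    pub w: f64,
}

// Copy the relevant section of the combined image onto a canvas, then use
// the canvas as the source for one cube map face.
pub fn assign_texture_face(
    gl: &WebGl2RenderingContext,
    image: &ImageBitmap,
    tex: &CubeMapTexSxSySw,
) -> Result<(), JsValue> {
    let document = web_sys::window()
        .and_then(|w| w.document())
        .ok_or_else(|| JsValue::from_str("no document"))?;
    let canvas: HtmlCanvasElement = document.create_element("canvas")?.dyn_into()?;
    canvas.set_width(tex.w as u32);
    canvas.set_height(tex.w as u32);
    let ctx: CanvasRenderingContext2d = canvas
        .get_context("2d")?
        .ok_or_else(|| JsValue::from_str("no 2d context"))?
        .dyn_into()?;
    // Draw the (x, y, w, w) section of the combined image onto the canvas.
    ctx.draw_image_with_image_bitmap_and_sw_and_sh_and_dx_and_dy_and_dw_and_dh(
        image, tex.x, tex.y, tex.w, tex.w, 0.0, 0.0, tex.w, tex.w,
    )?;
    // Upload the canvas content as the cube map face.
    gl.tex_image_2d_with_u32_and_u32_and_html_canvas_element(
        tex.target,
        0,
        WebGl2RenderingContext::RGBA as i32,
        WebGl2RenderingContext::RGBA,
        WebGl2RenderingContext::UNSIGNED_BYTE,
        &canvas,
    )
}
```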
