June 11, 2020
The image effects you can do with WebGL are what get me excited about this sort of stuff. The trouble is that there’s a bit of a learning curve, and it’s certainly not easy. The good thing is that once you’ve got it figured out and set up, you can re-use your base and improve/adjust things as you learn.
I’m going to walk through two ways you can use images in your work. The first is a simple method that will provide a foundation for understanding how to move images from the DOM into WebGL-land; the second builds off the first and introduces the concept of shaders - the magic ingredient!
The first thing that you need to be aware of is that everything inside the canvas isn’t part of the regular flow of the DOM. This means that you’ll need to size and position things yourself, but this isn’t a bad thing. In fact, it fits in perfectly with the idea of progressive enhancement.
What we’ll do is render our images in the DOM as we would normally. Then we’ll grab them all and pass them into our canvas. Doing things this way means we know:

- exactly where each image sits on the page
- exactly what size each image needs to be

It also means that the images remain fully accessible on the page, and that we lose nothing of value if for some reason the WebGL doesn’t load or someone has their JavaScript turned off.
Let’s use some React context and a custom component to scoop up all of these images.
For context (heh) I follow the pattern explained by Kent C. Dodds.
```jsx
// src/webgl/context.js
import React, { useState, createContext, useContext } from 'react'

const WebGLStateContext = createContext()
const WebGLDispatchContext = createContext()

export const WebGLProvider = ({ children }) => {
  const [state, dispatch] = useState([])

  return (
    <WebGLStateContext.Provider value={state}>
      <WebGLDispatchContext.Provider value={dispatch}>
        {children}
      </WebGLDispatchContext.Provider>
    </WebGLStateContext.Provider>
  )
}

export function useWebGLState() {
  const context = useContext(WebGLStateContext)
  if (context === undefined) {
    throw new Error('useWebGLState must be used within a WebGLProvider')
  }
  return context
}

export function useWebGLDispatch() {
  const context = useContext(WebGLDispatchContext)
  if (context === undefined) {
    throw new Error('useWebGLDispatch must be used within a WebGLProvider')
  }
  return context
}
```
What we’ve done here is set up a React context that will hold all of the images that we’ll then use in our WebGL code.
Then to hook it up we’ll need to add `<WebGLProvider />` somewhere; if you’re not sure where is best, I’d recommend the app root.
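To give a rough idea, here’s a minimal sketch of what that hook-up might look like; the file name and app structure are my assumptions, not something prescribed by this setup:

```jsx
// src/app.js (a hypothetical app root, adjust to wherever yours lives)
import React from 'react'
import { WebGLProvider } from './webgl/context'

const App = ({ children }) => <WebGLProvider>{children}</WebGLProvider>

export default App
```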
The next step is putting together a little component to grab any images we want to use. To keep things simple, I’m going to add this to the context file we’ve just set up.
```jsx
// src/webgl/context.js (same file as before; fold these imports into the existing ones)
import React, { useRef, useState } from 'react'

export const WebGLImage = props => {
  const ref = useRef()
  const dispatch = useWebGLDispatch()
  const [loaded, setLoaded] = useState(false)

  const handleLoad = () => {
    setLoaded(true)
    dispatch(images => [...images, { data: ref.current }])
  }

  return (
    <img
      alt=""
      {...props}
      ref={ref}
      onLoad={handleLoad}
      style={{
        // Hide the DOM copy once it has loaded; the WebGL copy renders on top
        opacity: loaded ? 0 : 1,
      }}
    />
  )
}
```
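To use it, swap any `<img />` you want in WebGL-land for the new component. Here’s a minimal sketch; the page component and image path are made up for illustration:

```jsx
// A hypothetical page component, anywhere below <WebGLProvider />
import React from 'react'
import { WebGLImage } from './webgl/context'

const Hero = () => (
  <section>
    <WebGLImage src="/images/example.jpg" alt="A scenic example photo" />
  </section>
)

export default Hero
```

With the images being collected, we can set up the canvas that will render them: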
```jsx
// src/webgl/canvas.js
import React, { Suspense, useMemo } from 'react'
import { Canvas, useLoader, useThree } from 'react-three-fiber'
import { TextureLoader, LinearFilter, ClampToEdgeWrapping } from 'three'
import { useWebGLState } from './context'

// Next step
function Image() { ... }

function WebGLCanvas() {
  const state = useWebGLState()

  return (
    <Canvas
      orthographic
      style={{
        height: '100vh',
        position: 'fixed',
        top: 0,
        right: 0,
        bottom: 0,
        left: 0,
        zIndex: 1,
        pointerEvents: 'none',
        transform: 'translateZ(0)',
      }}
    >
      <Suspense fallback={null}>
        {state.map(image => (
          <Image key={image.data.src} image={image} />
        ))}
      </Suspense>
    </Canvas>
  )
}

export default WebGLCanvas
```
Let’s go over what’s happening here:

- The canvas is fixed across the whole viewport and sits above the page, but `pointerEvents: 'none'` keeps the DOM underneath interactive
- `<Suspense />` is being used to deal with image loading
- Each image in our context state is rendered as an `<Image />` component

There’s a lot to unpack in this next section so I’ve added comments to the code example, so let’s dig in.
```jsx
const Image = ({ image }) => {
  // The first step is to load the image into a texture that we can use in WebGL
  const texture = useLoader(TextureLoader, image.data.src)

  // Then we want to get the viewport from ThreeJS so we can do some calculations later
  const { viewport } = useThree()

  // We need to apply some corrections to the texture we've just made
  useMemo(() => {
    texture.generateMipmaps = false
    texture.wrapS = texture.wrapT = ClampToEdgeWrapping
    texture.minFilter = LinearFilter
    texture.needsUpdate = true
  }, [
    texture.generateMipmaps,
    texture.wrapS,
    texture.wrapT,
    texture.minFilter,
    texture.needsUpdate,
  ])

  // Here we grab the size and position of the image from the DOM
  const { width, height, top, left } = image.data.getBoundingClientRect()

  return (
    <mesh
      // We convert the width and height to relative viewport units
      scale={[
        (width / window.innerWidth) * viewport.width,
        (height / window.innerHeight) * viewport.height,
        1,
      ]}
      // We convert the x and y positions to relative viewport units
      position={[
        ((width / window.innerWidth) * viewport.width) / 2 -
          viewport.width / 2 +
          (left / window.innerWidth) * viewport.width,
        0 -
          ((height / window.innerHeight) * viewport.height) / 2 +
          viewport.height / 2 -
          (top / window.innerHeight) * viewport.height,
        0,
      ]}
    >
      {/* We're using a simple plane geometry */}
      {/* think of it like a piece of paper as a 3d shape */}
      <planeBufferGeometry attach="geometry" />
      {/* Finally we map the texture to a material */}
      {/* or in other terms, put the image on the shape */}
      <meshBasicMaterial attach="material" map={texture} />
    </mesh>
  )
}
```
I chose this image because it reflects how I feel about doing this sort of math.
For anyone who knows of a better way, please do let me know. Otherwise, for anyone who is interested in the thought process behind the maths (there’s a standalone sketch of it after this list):

- To get a relative size, divide the DOM measurement by the window size and multiply by the Three.js viewport size
- The DOM positions elements from the top-left corner of the screen, while Three.js positions a mesh from its centre relative to the middle of the viewport, so the left position needs offsetting by half the viewport width plus half the mesh’s own width
- You have to do the opposite of the last point to get a matching relative top position, because the y-axis points down in the DOM but up in WebGL
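Here’s that same maths pulled out into a standalone helper so it’s easier to follow. The function name is mine; the post itself keeps these calculations inline in the `<Image />` component:

```js
// Convert a DOM bounding rect into Three.js viewport units.
// `rect` comes from getBoundingClientRect(), `viewport` from useThree().
const domRectToViewport = (rect, viewport) => {
  // Sizes: the fraction of the window the image takes up, scaled to viewport units
  const width = (rect.width / window.innerWidth) * viewport.width
  const height = (rect.height / window.innerHeight) * viewport.height

  // Positions: the DOM measures from the top-left with y pointing down, WebGL from
  // the centre with y pointing up, hence the half-offsets and the flipped signs
  const x =
    width / 2 - viewport.width / 2 + (rect.left / window.innerWidth) * viewport.width
  const y =
    viewport.height / 2 - height / 2 - (rect.top / window.innerHeight) * viewport.height

  return { width, height, x, y }
}
```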
A `<meshBasicMaterial />` is a perfectly fine thing to use, but unless you’ve got some grand ideas, for the purpose of rendering images you might be better off without WebGL. The key to the fun effects you’re looking for belongs in shader-land, and for this we’ll be using a `<shaderMaterial />` instead.
The first thing we’re going to do is create a new file:
```js
// src/webgl/image.js
import * as THREE from 'three'
import { extend } from 'react-three-fiber'

export default class Image extends THREE.ShaderMaterial {
  constructor() {
    super({
      uniforms: {
        texture: { type: 't', value: undefined },
      },
      vertexShader: `
        varying vec2 vUv;

        void main() {
          vUv = uv;
          gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
        }
      `,
      fragmentShader: `
        varying vec2 vUv;
        uniform sampler2D texture;

        void main() {
          vec2 uv = vUv;
          vec4 color = texture2D(texture, uv);
          gl_FragColor = color;
        }
      `,
    })
  }

  // These let us set the texture uniform with a react prop
  get texture() {
    return this.uniforms.texture.value
  }

  set texture(v) {
    this.uniforms.texture.value = v
  }
}

// register an element in r3f as <image />
extend({ Image })
```
There are 3 main points to understand in this new file:

- `uniforms` - these are variables that we share with the shader code
- `vertexShader` - this shader handles the positioning of the pixels being rendered
- `fragmentShader` - this shader handles the colour of the pixels being rendered

Following these points, we’re passing a texture (an image) into our shaders. The shaders then get the positioning and colours of each pixel (the `uv` value) and then draw it to the screen.
For more information on shaders I’d highly recommend The Book of Shaders. Otherwise you can go find shaders other people have written and copy/jury-rig them into place.
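To give a taste of what that looks like in practice, here’s a small example of my own (not part of the setup above): converting the image to greyscale only takes a couple of extra lines in the `fragmentShader` string:

```js
// A sketch: swap the fragmentShader above for this to render the image in greyscale
fragmentShader: `
  varying vec2 vUv;
  uniform sampler2D texture;

  void main() {
    vec4 color = texture2D(texture, vUv);
    // Weight the channels by how bright each appears to the human eye
    float grey = dot(color.rgb, vec3(0.299, 0.587, 0.114));
    gl_FragColor = vec4(vec3(grey), color.a);
  }
`,
```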
The next step is to import this new file into our `canvas.js`:
```js
// src/webgl/canvas.js
import React, { Suspense, useMemo } from 'react'
import { Canvas, useLoader, useThree } from 'react-three-fiber'
import { TextureLoader, LinearFilter, ClampToEdgeWrapping } from 'three'
import { useWebGLState } from './context'
import './image'
```
Then we swap out the `<meshBasicMaterial />` for our new `<image />` component!

```jsx
<image attach="material" texture={texture} />
```
Here’s an embed of the finished code:
And there we have it. Now we’re free to start adding some cool effects!
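If you’re after a starting point, one common move is animating the shader over time. This is a sketch of my own, not part of the code above: it assumes you add a `time: { value: 0 }` uniform to the material class (with a getter/setter like the `texture` one), declare `uniform float time;` in the fragment shader, and use it there, e.g. `uv.y += sin(uv.x * 10.0 + time) * 0.01;` for a gentle wave.

```jsx
// Inside the <Image /> component from earlier (sketch only):
import { useRef } from 'react'
import { useFrame } from 'react-three-fiber'

// ...
const material = useRef()

// useFrame runs on every rendered frame; we feed the elapsed time to the shader
useFrame(({ clock }) => {
  if (material.current) {
    material.current.time = clock.getElapsedTime()
  }
})

// ...and attach the ref when rendering:
// <image ref={material} attach="material" texture={texture} />
```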