Three.js, at its core, is a library that lets you describe data in three dimensions, e.g. as meshes and lights, and then convert that data into a 2D representation on an HTML canvas.
Before we can do anything with Three.js, we need three things: a scene, a camera, and a renderer.
In this video, in order to understand these three things better, we discuss and experiment with some of the options each of them offers.
A scene lets you set up what is to be rendered by Three.js and where it sits in 3D coordinates. The scene is also sometimes referred to as the scene graph.
We can have multiple scenes.
Once you have a scene, you can add objects and lighting to it.
The renderer draws the scene onto an HTML canvas element. By default it uses WebGL.
WebGL allows GPU-accelerated image processing and effects as the renderer creates the 2D image for the Canvas.
There are many types of cameras in Three.js. In this video we experiment with the Perspective and Orthographic cameras.
The camera's properties describe a frustum: the region of the scene that will be rendered.
The Perspective projection is designed to mimic the way the human eye sees. It is a very common projection mode used when rendering 3D scenes.
It is important to note that the OrthographicCamera constructor expects its values in 3D world units, not pixel units. When you create a PerspectiveCamera, the aspect ratio is based on the canvas pixel dimensions; if you did the same for an OrthographicCamera, you would get very different results depending on screen size and whether your client was a desktop or a mobile phone.

Instead, you pass coordinates in fixed world units that describe a hollow box. For example, left = -5, right = 5, top = 5, bottom = -5 defines a 10-by-10 view, so anything within -5 to 5 on the X axis and -5 to 5 on the Y axis would be in view, provided your camera target was at the default (0,0,0) and the camera was looking straight down the Z axis. The near and far planes decide the visible Z-axis range.