Shaders | Difference between CPU & GPU

A shader is code executed by your Graphics Processing Unit (GPU). If you are a programmer planning to write your first shader, or a designer who feels like taking some additional control over the visuals, here is one way to understand the computational power you have at your disposal:

CPU Think of your processor, the Central Processing Unit (CPU), as a very fast guy who has a calculator and can do almost anything he is told to do. But he can only do one thing at a time. When you hear that a CPU has a bunch of cores, it means there are actually a bunch of these guys. But they can't really help each other with the same piece of work, so your processor is not two times faster just because it has two cores. Each of those guys does his own thing: it is more likely that one of them will handle your music player while the other manages your browser. Now, when you have a single-core processor and the music player and the browser are running at the same time, the single guy has to do both things, but not really at the same time: he spends a few milliseconds on one task and a few on the other. It's probably a mess on his desk, coffee everywhere and stuff. A programmer can control which guy (core) executes his program and can split tasks between them, but usually you don't worry about that: the OS will allocate a core (a guy) to handle the program you made.

Threads If you do want a couple of cores to do some processing together, you need to create so-called threads. The guys (cores) are working in their own rooms, and you'd better make sure one of them doesn't need to enter the other guy's room to grab something from his table. They each work with memory, but if two cores work on the same data without coordination, it's like somebody coming into your room, taking the thing you were working on and changing it, without knowing what you were planning to do with it.

When you create a thread, it's like calling a function that can take a really long time to finish, but instead of waiting for it to finish, your code continues while that function does its thing. *This is not an official definition of a thread. You may ask: "What if the thread is preparing some information and I try to use that data before the thread finishes preparing it?" The answer is: you get a complete, unpredictable mess. So when you are using a program and in the corner there is a progress bar showing some process, that process is likely handled by a thread, and the programmer made sure there is no risk of grabbing a half-finished result.
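The progress-bar situation above can be sketched in a few lines of Python (used here just for illustration): a worker thread prepares data while the main code keeps going, and `join()` is one way to make sure we never read a half-finished result.

```python
# A worker thread prepares data while the main code continues;
# join() is how we avoid grabbing a half-finished result.
import threading

results = []

def prepare_data():
    # imagine this takes a long time (loading files, crunching numbers...)
    for i in range(5):
        results.append(i * i)

worker = threading.Thread(target=prepare_data)
worker.start()   # the main code continues immediately...
# ...do other things here while the worker runs...
worker.join()    # ...but we wait here before touching the data
print(results)   # safe: the thread is guaranteed to be done
```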

GPU Now the GPU is a different story. We have a monitor, and that monitor has lots of pixels: say 1920×1080 = 2,073,600 of them. Your CPU can do one thing at a time, so that may be two million or more operations just to paint stuff on your Full-HD screen. Your monitor basically reads a picture from your computer's memory, and the CPU can fill that picture with colors, lines, letters... and that is basically what's happening right now. Even though you have a GPU, it doesn't mean your browser and all your applications are using it: some do to some extent, some don't.

The CPU is good at doing one thing after another very fast, and in most scenarios that's what we need. But when it comes to graphics, when we are painting lots of stuff to the screen, it isn't really necessary to finish painting one pixel before starting the next. Unless we are doing something like a blur effect, we don't care what is in the pixel before this one, or the one below it.

So the GPU is like a big factory with millions of guys who are not as fast, not as flexible and not as smart. Let's say they are monkeys: a factory with millions of monkeys. It takes some time to show the monkeys what to do, they can remember only a limited number of operations, and you can't give them individual assignments; there is one big LCD that tells them all what to do. *Not an actual GPU architecture. Modern GPUs may be a bit more flexible than this description, but the analogy still holds. So, a big factory of monkeys: they still don't talk to each other, like our previous guys, but each of them is given a small amount of work, and while each one is slower, there are just so many of them that the job gets done faster. In the case of 3D rendering, that's exactly what we need: the same simple set of operations done for each pixel. Taking a picture and multiplying it by our light color is usually one of the things a shader does.
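That "same simple operation per pixel" idea can be sketched on the CPU (in Python, just for illustration): tint every pixel of a tiny 2×2 "texture" by a light color. On a GPU each pixel would be handled by a different core at roughly the same time; here we loop, but notice that no pixel depends on any other.

```python
# Tint each pixel of a tiny "texture" by a light color.
# Every pixel is independent - exactly what makes this GPU-friendly.
light = (1.0, 0.5, 0.5)  # a reddish light

texture = [
    [(1.0, 1.0, 1.0), (0.5, 0.5, 0.5)],
    [(0.0, 1.0, 0.0), (1.0, 0.0, 1.0)],
]

lit = [[tuple(c * l for c, l in zip(pixel, light)) for pixel in row]
       for row in texture]

print(lit[0][0])  # (1.0, 0.5, 0.5): the white pixel takes the light's color
```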


Usually a shader is two functions: one calculates stuff for every vertex of a mesh, and the second, called the fragment function, calculates for every pixel on screen where the mesh's triangles are rendered. If you want a shader that adds some wind shaking, in the vertex function you will change the position of a point. If you want to add fog, you will change the color based on distance from the camera in the fragment (pixel) function. The vertex function prepares data for the fragment function, and that data is interpolated automatically: if one vertex provides white color and the second black, pixels on the screen between those points will gradually go from white through grey to black. Here are some examples using my Playtime Painter:
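That automatic interpolation boils down to a linear blend. A Python sketch (not real shader code) of the white-to-black gradient between two vertices:

```python
# The GPU interpolates vertex outputs for every pixel in between.
# That is plain linear interpolation: t = 0 gives one end, t = 1 the other.
def lerp(a, b, t):
    return a + (b - a) * t

white, black = 1.0, 0.0
# five pixels lying between the white vertex and the black one:
shades = [lerp(white, black, i / 4) for i in range(5)]
print(shades)  # [1.0, 0.75, 0.5, 0.25, 0.0] - white to grey to black
```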

In this image it may seem like there is too much white; that is because I use Linear color space, a correct color space. Our eyes are better at telling the difference between darker shades and less sensitive to changes in brighter colors.
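A sketch of why linear values look brighter than expected when displayed: monitors expect sRGB-encoded values, so a physically-linear 0.5 (half the light) maps to a higher display value. This uses the common gamma-2.2 approximation (the actual sRGB curve is piecewise, but 2.2 is close):

```python
# Approximate sRGB transfer curve with gamma 2.2 (a common simplification).
def linear_to_srgb(x):
    return x ** (1 / 2.2)

def srgb_to_linear(x):
    return x ** 2.2

mid_light = 0.5  # half the photons, physically
print(round(linear_to_srgb(mid_light), 2))  # 0.73 - displayed brighter than 0.5
```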

This video provides a great explanation:


On the GPU you have code and data, just like on the CPU. On the GPU the data is usually textures (pictures) and the code is shaders. A Material is basically the way you combine those, plus some other parameters: you select a shader, then set which textures it will use and what its parameters are. In short, a material is a way to manage shaders and the input data they use.
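As a data structure, a material is not much more than this Python sketch. The field and texture names here are made up for illustration (engines like Unity wrap the same idea in their own API):

```python
# A material = a shader reference + the textures and parameters it reads.
from dataclasses import dataclass, field

@dataclass
class Material:
    shader: str                                     # which code to run
    textures: dict = field(default_factory=dict)    # which data it reads
    parameters: dict = field(default_factory=dict)  # tint, glossiness...

brick_wall = Material(
    shader="Standard",
    textures={"_MainTex": "brick_albedo.png"},  # hypothetical texture name
    parameters={"_Glossiness": 0.2},
)
print(brick_wall.shader)  # Standard
```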


A mesh is a shape: a collection of triangles. A mesh is actually simpler than you may imagine. A square would be 4 dots, represented by 4 vectors, and the triangles would be 6 indices: 3 indices for the first triangle and 3 for the second.

And it's as simple as it sounds:

mesh.triangles = new int[] {1,2,3,4,3,2} . A small note: some shaders draw only the triangles that are facing the camera. To know whether a triangle is facing the camera, the GPU checks whether its indices go clockwise from the camera's point of view; this is why the last 3 indices are 4,3,2. Indices 4,2,3 or 2,3,4 would have made the triangle invisible from this side (ignore the white numbers 2 and 1/2, focus on the painted ones):
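The clockwise check can be sketched in Python: in 2D screen space, the sign of the cross product of two triangle edges gives the winding order. Swap any two indices and the sign flips, which is exactly why one ordering is drawn and the other is culled.

```python
# Winding-order test: the sign of the 2D cross product of two edges
# tells whether the triangle's vertices go clockwise on screen.
def is_clockwise(a, b, c):
    # cross product of edges (b - a) and (c - a);
    # negative means clockwise in a y-up coordinate system
    cross = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return cross < 0

# a square's corners in screen space (y pointing up)
p1, p2, p3, p4 = (0, 1), (1, 1), (0, 0), (1, 0)

print(is_clockwise(p1, p2, p3))  # True  - facing us, drawn
print(is_clockwise(p1, p3, p2))  # False - reversed winding, culled
```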


Different materials can use the same shader with different textures. Different meshes can use the same material (for example, different walls could have different shapes but still use the same shader and texture). Objects are usually a combination of a material and a mesh... plus all the other stuff (colliders, physics etc.).


The CPU can do one thing after another very fast; the GPU can perform the same small task many times, for different segments of your screen. The GPU is best at smaller tasks and is slow at switching between tasks, and the CPU is used to give the GPU commands on what to do. In applications for 3D development you will usually not worry about this technical part. In Unity you place objects in the scene, decide which mesh and material they will use, apply a shader and textures to the materials, choose where the camera is positioned, and at the end of each frame all the triangles and draw calls are sent to the GPU. So every frame, almost everything is redrawn from scratch.
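The per-frame flow above can be sketched in Python. The function names and scene layout here are invented stand-ins for what an engine like Unity does for you behind the scenes:

```python
# Each frame, the CPU walks the scene and issues one draw call per object:
# "GPU, run this material's shader on this mesh". Nothing from the
# previous frame survives - the picture is rebuilt from scratch.
scene = [
    {"mesh": "wall",   "material": "brick"},
    {"mesh": "wall",   "material": "plaster"},
    {"mesh": "player", "material": "skin"},
]

def render_frame(scene):
    # a real engine would clear the screen here, throwing away the old picture
    draw_calls = []
    for obj in scene:
        draw_calls.append((obj["mesh"], obj["material"]))
    return draw_calls

frame = render_frame(scene)
print(len(frame))  # 3 draw calls, redone every single frame
```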
