
Creating a 3D Grid of points in Blender 3.0

In my spare time, one of the things I like to play with is rendering stuff in Blender. While I'm very much a beginner and not learning Blender professionally, it's a lot of fun to play around with!

Recently, Blender has added geometry nodes (which I alluded to in a previous post), which are an extremely powerful way of describing and creating geometry using a node-based system.

While playing around with this feature, I wanted to create a 3D grid of points to instance an object onto. When I discovered that this wasn't really possible, I set to work creating my own node group to do the job, and I thought I'd quickly share it here.

First, here's a render I threw together demonstrating what you can do with this technique:

(Above: Coloured spheres surrounded by sparkles)

The above is actually just the default cube, with a geometry nodes modifier applied!

The core of the technique is a node group I call Grid3D. By instancing a grid at a 90° angle on another grid, we can create a grid of points:

(Above: The Grid3D node group)

The complicated bit at the beginning is where I break out the parameters in a way that's easier to understand from the outside of the node group - abstracting a lot of the head-scratching away!
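Conceptually, the node group is just computing the Cartesian product of three evenly spaced ranges. Here's a rough plain-Python sketch of the idea (this is illustrative only - it's not Blender code, and the function and parameter names are my own):

```python
# Sketch: generate an X * Y * Z grid of points, mirroring what the
# Grid3D node group does by instancing one grid onto another.
# The counts/spacing parameters are illustrative, not Blender's names.
from itertools import product

def grid3d(counts=(3, 3, 3), spacing=(1.0, 1.0, 1.0)):
    cx, cy, cz = counts
    sx, sy, sz = spacing
    # Every combination of x, y, and z indices becomes one point
    return [
        (x * sx, y * sy, z * sz)
        for x, y, z in product(range(cx), range(cy), range(cz))
    ]

points = grid3d((2, 2, 2))
print(len(points))  # 8 points for a 2x2x2 grid
```

In node terms, instancing a 2D grid onto the points of a 1D line of points gives you exactly this product of ranges - which is why stacking a grid on a grid at 90° works.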

Since instancing objects onto the grid is by far my most common use-case, I wrapped the Grid3D node group in a second node group called Grid3D Instance:

This node group passes through all the parameters of the inner Grid3D node group, but also adds a new position randomness vector parameter that controls how far each instance is randomly translated on each of the 3 axes (since I couldn't find a way to translate the points directly - only the instances placed on those points).

(Above: instanced cubes growing and shrinking)
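The randomness parameter behaves roughly like the following plain-Python sketch (again, not Blender code - the names here are my own, and each component of the vector caps the offset on that axis):

```python
# Sketch: apply a per-instance random translation, like the position
# randomness vector on the Grid3D Instance node group.
# randomness gives the maximum offset magnitude per axis.
import random

def jitter(points, randomness=(0.1, 0.1, 0.1)):
    rx, ry, rz = randomness
    return [
        (x + random.uniform(-rx, rx),
         y + random.uniform(-ry, ry),
         z + random.uniform(-rz, rz))
        for x, y, z in points
    ]
```

Setting any component of the randomness vector to 0 leaves that axis untouched, which is handy for e.g. scattering only in the horizontal plane.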

Now that Blender 3.1 has just come out, I'm excited to see what more can be done with the new volumetric point cloud functions in geometry nodes - which may (or may not; I have yet to check) make this method obsolete. Still, I wanted to post about it anyway for my own future reference.

Another new feature of Blender 3.1 is that node groups can now be marked as assets, so here's a sample Blender file you can put in your assets folder that contains my Grid3D and Grid3D Instance node groups:

A review of graph / node based logic declaration through Blender

Recently, Blender started their Everything Nodes project. The first output of this project is their fantastic geometry nodes system (debuted in Blender 2.9, and still under development), which allows the geometry of a mesh (and the materials it uses) to be dynamically modified to apply procedural effects - or even to declare entirely new geometry!

I've been playing around with and learning Blender a bit recently for fun, and as soon as I saw the new geometry nodes system in Blender I knew it would enable powerful new techniques. In this post, I want to talk more generally about node / graph-based logic declaration, and why it can sometimes make a complex concept like modifying geometry much easier to understand and work with efficiently.


(Above: Blender's geometry nodes at work.)

Manipulating 3D geometry carries more inherent complexity than its 2D counterpart - programs such as Inkscape and GIMP have the latter pretty much sorted. To this end, Blender supplies a number of tools for editing 3D geometry, like edit mode and a sculpting system. These are powerful in their own right, but what if we want to do some procedural generation? Suddenly these feel far from the right tools for the job.

One solution here is to provide an API reference and allow scripts to be written to manipulate geometry. While Blender does this already, it's inaccessible to those who aren't proficient programmers, and large APIs often come with a steep learning curve (and a higher cognitive load). It can also be a challenge to "think in 3D" while programming (I know this took some getting used to when I was doing the 3D graphics module at University!).

In a sense, node based programming systems feel a bit like a functional programming style. Their strength is composability, in that you can quickly throw together a bunch of different functions (or nodes in this case) to get the desired effect. This reduces cognitive load (especially when there's an instantly updating preview available) as I mentioned earlier - which also has the side effect of reducing the barrier to entry.

Blender's implementation

There's a lot to like about Blender's implementation of a node-based editor. The visual cues for both the nodes themselves and the sockets are great. Nodes are colour coded to group them by related functionality, and sockets are coloured according to data type. I would be slightly wary of issues for colourblind users though - while it looks like this has been discussed already, it doesn't seem like an easy solution has been implemented yet.

This minor issue aside, Blender's new geometry nodes feature also uses socket shape to distinguish between single values and values that can change for each instance - which feels intuitive to understand.

When implementing a UI like this - as with API design - the user interface needs to be carefully considered and polished. Blender's implementation is - and just how much so only became apparent when I tried Material Maker's node editor. While Material Maker is cool, I encountered a few minor issues which made its UI feel "clunky" compared to Blender's implementation. For example:

  • Blender automatically wraps your cursor around the screen when you're scrubbing a value
  • Material Maker's preview didn't stack correctly underneath the node graph, leading to visual artefacts


Blender's implementation of a node-based editor isn't all perfect though. Now that I've used it for a while, I've observed a few frustrations that I (and I assume others) have had - starting with the names of nodes. When you're first starting out, it can be a challenge to guess the name of the node you want.

For example, the switch node functions like an if statement, but I didn't immediately think of calling it a switch node - so I had to do a web search to discover this. To remedy this issue, each node could have a number of hidden alias names that are also searched, or perhaps each node has a short description in the selection menu that is also searched.
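For what it's worth, the behaviour that tripped me up maps onto an ordinary conditional. A rough Python sketch of what a switch node does per element (illustrative only - the function name and signature are my own):

```python
# Sketch: a geometry-nodes "switch" is essentially a per-element
# conditional, picking from one of two inputs based on a boolean.
def switch(condition, false_values, true_values):
    return [t if c else f
            for c, f, t in zip(condition, false_values, true_values)]

print(switch([True, False, True], [0, 0, 0], [1, 1, 1]))  # [1, 0, 1]
```

Knowing that mapping doesn't help you find the node in the first place though - hence the suggestion about alias names above.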

Another related issue is that nodes don't always do what you expect them to, or you're completely baffled as to what their purpose is in the first place. This is where great documentation is essential. Blender has documentation on every node in all their node editors (shader, compositor, and now geometry), but they don't always give examples as to how each node could be used. It would also be nice to see a short tooltip when I hover over a node's header explaining what it does.

In the same vein, it's also important to ensure a measure of consistency if you have multiple node editors. While this is mostly the case with Blender, I have noticed that a few nodes have different names across the compositing, shading, and geometry nodes workspaces (the switch node), and some straight up don't exist in other workspaces (the curve nodes). This can be the source of both confusion and frustration.


In conclusion, node-based editors are cool, and a good way to present a complex set of options in an easy to understand interface. While we've looked at Blender's implementation of a node-based editor, others do exist such as Material Maker.

Node-based interfaces have limitless possibilities - for example the Web Audio API is graph-based already, so I can only imagine how neat a graphical node-based audio editor could be - or indeed other ideas I've had including a node-based SVG generator (which I probably won't get around to experimenting with for a while).

As a final thought, a node-based flowchart could potentially be a good first introduction to logic and programming. For example, something a bit like Scratch or some other robotics control project - I'm sure something like this exists already.

If you know of a cool node-based interface, do leave a comment below.

