Houdini points to volume

This operator is used to generate a regular set of points that fill a given volume. This can be useful for generating a field of particles or initializing a particle fluid. The input type parameter controls how the incoming geometry is interpreted. In auto-detect mode, if the input is a single volume primitive, the Fog or SDF method will be used depending on whether the volume primitive has the SDF flag set.

In Fog mode, the first volume of the input is treated as a fog volume. Voxels with a 1 value will have points, those with 0 will not have points. In SDF mode, the first volume of the input is treated as a signed distance field.

Voxels with negative values will have points; those with positive values will not. The bounding box method creates points inside the entire bounding box of the input, then removes those outside the volume. This method is efficient for inputs that are close to axis-aligned boxes, but can be slow and memory-inefficient for sparser configurations.
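As a sketch of the inside test described above, here is a minimal point wrangle that keeps points where the SDF is negative (the SDF volume on the second input and volume index 0 are assumptions):

```vex
// Point wrangle: delete points that fall outside the SDF.
// Assumes the SDF volume is wired into input 1.
float d = volumesample(1, 0, @P);  // signed distance at this point
if (d > 0.0)                       // positive = outside the surface
    removepoint(0, @ptnum);
```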

This method also provides backwards compatibility for files created in older versions of Houdini. The active voxels method creates points only in the active voxels of the input volume, if they also lie inside the bounding volume.


This method can handle very sparse configurations where the input objects are distributed across space and do not fit well into an axis-aligned bounding box. Because a sparse volume is required, this method creates an OpenVDB volume of the input internally, and the Convert To Fog option will always be applied.

The invert option flips the sense of which points will be kept. Because the space is seeded with a bounding box of points outside of the object, this often results in a surrounding cube of points unless the border condition of the volume is altered. Two lattice configurations are available: a loosely packed one that places the points at the vertices of a regular three-dimensional grid, and a tightly packed one that places each point at equal distance from each of the three other closest points.


The point separation is the smallest distance between any two of the generated points in the initial configuration; increasing this value will generate fewer total points, and will be faster to process. An SDF value can be set as the threshold considered the outer surface of the volume (this parameter is only enabled for the Sparse Volume construction method). If enabled, a second SDF value marks the inner surface, allowing point creation within only a slice of the input volume. A jitter amount controls how much randomness is applied to the positional values of the points.

Jitter causes random changes to the positions of the points. The input geometry will first be converted into a fog volume using the given point separation.

This consumes more memory, but can greatly reduce the total time as the inside test can be performed very quickly on a volume.

The points generated will be centered at this offset from the origin. A value of 0,0,0 means that the origin will be included in the generated point set. The scale attribute is set to this multiple of the particle separation; having the particles larger than the separation ensures no particles are lost in the gaps between voxels. If a uniform lattice of points is being built, the surface layer will exhibit terracing as points cross the boundary. Dithering compares each point's distance to the cut-off threshold against a random number to decide whether the point should be kept.

This causes some points to be kept outside of the threshold, as it expands half a grid scale in both directions. Turning this on will result in a more randomized surface layer. Often an object has one face that is free, while the other faces are constrained by collisions. The geometric normal of the SDF built from the surface is used, not any normal attribute on the incoming geometry.

The pyro solver isn't too scary once you understand it, and gets amazing results. I found I got a better understanding by building a simple smoke sim first, and gradually adding complexity.

There are a few things to do before getting into dops, the first of which is to create a volume. I should point out that this is all relatively old, back in the H13 or H14 days. The current methods are still pretty similar, or at least were until H18 with its new-fangled pyro-in-sops stuff, but you should still be able to follow along with these to get the basic principles.

If you jump over to the HoudiniDops page there's some more up-to-date stuff.

There are many ways to create a volume; the simplest is to start with a polygon shape, append a 'vdb from polygons' sop, and set the mode to 'density'. The geometry spreadsheet now shows a single 'point', the vdb volume. If you've been using sops for a while, you might expect to see all the voxels individually listed, in the way you see all the points for a polygon object. Don't be alarmed, this is normal. To increase quality, lower the voxel size parameter.

Lower values mean smaller voxels, which means more detail. The viewport will render shadows within the volume if you go into a lit mode and add simple lights.

This can help with reading the shape of a volume, at the expense of performance. Keep in mind that Houdini lights default to physically correct falloff now, so you often need to boost their exposure by a few stops when you translate the light away from the volume.

For a smoke solver to do anything interesting, we need to tell it how to move. Again, many ways to do this, for now we'll re-use the 'vdb from polygons' sop. Now move to the 'vdb from polygons'. There's a surface attributes multi-parm. Click the plus button, and it creates a new set of parameters.

This allows you to read other point attributes from your shape, and create a new vdb volume from it. Use the combo box to choose point. The only change you'll see is that an extra row appeared in the geometry spreadsheet. This means we now have 2 vdb volumes, density and vel. Density is a float, ie, each voxel stores a single value to say how dense it is, while vel is a vector, where each voxel stores an x,y,z component.

By default Houdini will only display the density field. To view the vel field, you can append a volume visualisation sop and set the diffuse field to vel; you might have to adjust the density sliders to see it properly.

The most obvious thing we'll need is a smoke object. This will store data during the simulation. To run the simulation itself, we'll use a smoke solver. Connect the smoke object to the first input of the solver, and the solver to the output dop. This is now a working smoke sim, but not a very interesting one; if you play the timeline, nothing happens.

What we need now is to import the vdb volumes into this sim. Unlike pop networks, this isn't done automatically, so let's do that now.


Create a 'source volume' dop, and connect it to the last input of the solver (named 'sourcing' if you hover over it). Set the 'volume path' parameter to an expression pointing at your vdb.

Note the backticks, and let the autocomplete help you type the function name; it's hard to type quickly! If you middle click on the volume path label, you'll see that it toggles to the path of the vdb.
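The expression is typically a backtick-wrapped opinputpath call, which returns the path of the node wired into the given input. The node name below is hypothetical:

```
`opinputpath("../vdbfrompolygons1", 0)`
```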

Let the scene play, and you should get smoke. Why use opinputpath?

And of course it would also be too tedious for a lazy nerd like me to manually sculpt dents.

I was already trying to figure out a procedural approach for this dented look. VDBs are a really fast representation of voxels. Voxels are for 3D the same as what pixels are for a 2D image. Think of them as tiny cubes. Each voxel can store a Value. In our case the distance to a surface. If a voxel is on the outside, it will store a positive distance towards the surface.

If a voxel is on the inside, it will store the negative distance towards the surface. This is called an SDF, a signed distance field.

That was so straightforward. You guys are killing it! Thank you so much for doing these tutorials. This is going to help a lot of people, myself included.

The fact that you explain the technique with a visualisation process, say in Illustrator, is really super handy, at least for me and my small brain ;o. I have just one question: how did you do the render? I converted the VDB to a polygonal mesh, scattered points on it and connected them, then converted the resulting splines into polywires.

Exported via alembic and rendered in C4D using Octane. Hope that helps, Cheers, Moritz.

This is so awesome, super glad I found out about you guys. I tried to avoid Houdini as much as possible because it looked so complicated compared to other 3D packages. Your explanation is really top notch and super easy to follow.

At its most basic, xyzdist will return the distance from the sample point pt to the nearest point on the surface geometry.


So if we feed this function an integer and a vector, in addition to the distance to the surface, it will also give us the primitive number (prim) and the parametric UVs on that primitive (uv). Note that parametric UVs are not the same as regular UVs; this just means the normalized position relative to the individual primitive we found.
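A minimal sketch of that full signature in a point wrangle (the surface geometry being on the second input is an assumption):

```vex
// Distance to the nearest spot on the geometry in input 1,
// plus which primitive it lies on and where on that primitive.
int prim;
vector uv;
float dist = xyzdist(1, @P, prim, uv);
```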

The easiest example to start with is the rivet. We just want to stick a point onto an object, and have it follow the object around.

The initial setup. The pig itself is being merged in from another network. Connect your point and your template geometry to a Point Wrangle, and start with this code:

Now what? Given a geometry, a primitive number, and a parametric UV coordinate, primuv will tell you whatever the hell you want to know about the geometry at that point.
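The starting wrangle might look something like this sketch (template geometry assumed on the second input; the attribute names i@prim and v@uv are my own):

```vex
// Find the nearest primitive and parametric UV on input 1,
// and store them on our point for later use.
int prim;
vector uv;
xyzdist(1, @P, prim, uv);
i@prim = prim;
v@uv = uv;
```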


It will even interpolate attributes for you between points, such as point colors or position. Add this code to your Wrangle:
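A self-contained sketch of the idea, finding the nearest location and moving the point there in one wrangle (input wiring as above is an assumption):

```vex
// Find the nearest spot on input 1, then fetch the
// interpolated position there and move our point to it.
int prim;
vector uv;
xyzdist(1, @P, prim, uv);
@P = primuv(1, "P", prim, uv);
```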

You should see your dot stick to the nearest point on the surface! You only want to do that computation once. With deforming geometry, you could use a Time Shift to pick your reference frame.

With just a pig head being transformed at object level, we can use some options on our Object Merge to accomplish something similar. Connecting the second Wrangle and Object Merge.


This means we have a static reference point for finding the nearest primitive and point. Your code should look like this:

If you run the Scatter on static geometry (in this case with a Time Shift), we can use Attribute Interpolate to stick those points onto the moving geometry, via the same two values we can get from xyzdist.

Remember to make sure that the attributes on the Scatter (or any other node generating the points) and the Attribute Interpolate have the same name. Here we are going to use some actual old-fashioned UVs, as well as our fancy new functions, to achieve the effect. Using Attribute Interpolate. Note that the output attributes on the Scatter SOP need to be enabled for this to work.

We want the droplets to move down the surface of the can, maybe wiggling back and forth a tiny bit the way that droplets do due to surface tension. We could solve this particle simulation on a cylinder, and use sticky attributes or ray projection to figure out ways to wiggle the particles around and still keep them attached to the surface.

Or, we could just solve the problem in an easier space. UVs are nice and simple and flat! We want to make sure these are Point UVs, so we can manipulate points with them later. Once we have a projection we like, we can move the points into UV space with the following Point Wrangle:
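A sketch of such a wrangle, assuming a point uv attribute exists (v@uv) and stashing the original position in a rest attribute of my own naming:

```vex
// Remember where the point came from, then
// move it to its UV location in "real" space.
v@rest = @P;
@P = set(@uv.x, @uv.y, 0);
```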

Now you should see the points moved to their UV coordinates, but in real space. We can solve the particle system in this space very easily.

Particles will randomly stop on their way down, similar to how droplets sometimes hit a rivulet or another sticky part of the contact surface and slow down. Finally, particles that make it to the bottom of the can are destroyed. Append an Attribute Wrangle and connect the flattened geometry to the second input.

Your code should look something like this:

However, this is true for many applications that have the ability to script effects or manipulate geometry, etc. In most cases you need to dive in and get your hands a bit dirty. But like many things in life, we all need to start somewhere. Luckily for us we have talented Houdini artist Jonathan Granskog (JonathanGranskg) to give us some examples of how we can make use of these nodes that can leap a tall building in a single bound.

So what do you do when you have a complex algorithm that you want to use to create geometry or maybe you want to procedurally fill holes in meshes? There is no removevertex function because removeprim and removepoint will delete their connected vertices behind the scenes. One very important thing to keep in mind is that all the geometry is created after the node has run through all of your geometry components.

Notice the order of how the geometry is created. First we create the points of the corners of the triangle and afterwards we create a primitive that has no connection to the points so far.

The order in which you add the points to the primitive, via the vertices, is important because it determines the normal direction of the primitive.
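A minimal sketch of that order, in a wrangle set to run over Detail (only once):

```vex
// Create the three corner points of the triangle first...
int p0 = addpoint(0, {0, 0, 0});
int p1 = addpoint(0, {1, 0, 0});
int p2 = addpoint(0, {0, 1, 0});
// ...then a polygon connecting them; the order the points
// are added in determines the normal direction.
addprim(0, "poly", p0, p1, p2);
```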

And if you want to copy a point, including its attribute values and groups, you can use a point number instead of a vector value when calling addpoint. One small thing to notice is that I also have the Attribute Wrangle node set to run over Detail (only once), meaning that, like it says, it will only run once. This is a very simple way to create geometry from scratch in VEX, and I like to think of it as being similar to something like Processing, where you can create everything with code.

Removing points or primitives works exactly in the same way. With the removeprim function you can choose if you want to delete all the connected points as well.
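As a sketch, in a primitive wrangle the third argument of removeprim controls whether the connected points are deleted too:

```vex
// Delete this primitive, and (third argument = 1)
// also delete the points connected to it.
removeprim(0, @primnum, 1);
```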

I fractured a box and, using a Foreach node, centered each piece at the origin. Then I could use atoi to convert the string to an integer inside my Attribute Wrangle and compare it to the current frame, to only keep one piece. Then you can cache these out, copy them to particles, and stamp a random frame onto a Timeshift to copy random pieces to your particles.
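A hedged sketch of that keep-one-piece test in a primitive wrangle; the string attribute name (s@name) and it holding the piece index as digits are assumptions:

```vex
// Keep only the piece whose number matches the current frame.
// Assumes each piece carries a numeric string attribute s@name.
if (atoi(s@name) != int(@Frame))
    removeprim(0, @primnum, 1);
```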

All it does is check whether nearby points are within a certain distance from the current point and if there are any points close enough it will create a polyline between the two points. Just look for the corresponding nodes, they have exactly the same names as the functions.
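That connect-nearby-points idea might be sketched in a point wrangle like this (the search radius channel name is an assumption):

```vex
// Connect this point to its neighbours within a radius.
int near[] = nearpoints(0, @P, chf("radius"));
foreach (int pt; near) {
    // pt > @ptnum skips the point itself and also avoids
    // creating each connecting line twice.
    if (pt > @ptnum)
        addprim(0, "polyline", @ptnum, pt);
}
```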

Grab the Houdini scene file here: Houdini Scene File.

Confused me slightly at first, so worth mentioning: a volume is a primitive, the same way that a point is a primitive, or a primitive sphere is a primitive. This means that, unlike a polygon mesh for example, there's no drilling down to its sub-components; you can't open the geometry spreadsheet to see values in the individual voxels. There are other tools to help you see what's going on within a volume (volume slice sop, volume visualisation sop, volume trails sop etc), but ultimately, if you merge a volume with a primitive sphere and inspect the merge, it will show that you have 2 primitives; a volume and a sphere.

Houdini supports 2 volume types: its own volume format, and VDB. These are treated as primitives like polys or nurbs; if you middle click and hold on a node, you'll see it say '1 VDB' or '1 Volume' or similar. The simplest way to describe VDB is as Alembic, for volumes. It's an open source, standardised format, so you could export a VDB from Houdini, load it into Maya, and render in Vray.

It has several companies helping drive it (Dreamworks, Double Negative, SideFX being the most notable), and is generally a good thing. It's more than Alembic though, in that Alembic is mainly a file format plus a handful of example tools to manipulate and view Alembic files.

The VDB toolkit is both a way to store a volume and a large suite of algorithms to manipulate volumes. One of VDB's most interesting qualities is that it doesn't waste storage space on empty voxels, so VDBs can be substantially smaller on disk compared to other formats. The origin of the name VDB is discussed in the original paper. A fantastic walk-through of VDB features is also available from the openvdb website. It's a single Houdini file that is like an interactive book on VDB; it's broken into chapters, with lots of sticky notes. I'm surprised I only found this recently.

Many thanks to the person on odforce who pointed it out! Houdini's own volume format had been around for a while before VDB arrived, so there's a large number of nodes for working with them (you can see this in sops by typing 'volume' in the tab menu).

There's a 'convert vdb' sop for this. If you middle click on the node, you'll see the Volume primitive has been replaced with VDB. You can now cache this out to .vdb files on disk. What's nice is that a lot of the core Houdini volume tools have been updated to work with VDB primitives too. This means it's pretty safe to use VDB at all times, as you can easily convert a VDB temporarily to a native volume if required, and then convert back again.

A volume used to represent a cloud, for example, only needs to store a single value per voxel: density.

Anatomy of Houdini smoke solver - Houdini.


Smoke render, quality. First things first: You didn't even have a shader applied to the smoke. In that case mantra is simply reading density values and outputting them straight out, hence the images you got.

You need a shader (pyro shader) to begin with, you need a proper light setup (attenuation, shadows, intensity, color, etc), and you can also make use of a good volume filter (I personally like gaussian).

Lastly, you must take some time to set up the mantra ROP; rendering from the preview might be fine for checking things out, but a little optimization will go a long way to getting the images you want. I didn't even spend 2 minutes tweaking your file and it already looks different (arguably better) than your original post.

Nevertheless, it shows you can certainly get better results than that. And, please, upload your files to the forum, especially when they are this small. Cheers. PS: As per Jeff's rants, 'pyroclastic cloud' is a misnomer.

Streak artifact in billowy smoke. What you are witnessing, with your lingering volume being advected, is the various filters working away, creating patterns in your low-resolution density volume. To compound the issue, you are constructing the initial density volume with an Iso Offset SOP at a very low resolution, lower than the base grid size of your smoke simulation.

By simply bypassing that Iso Offset SOP feeding into the Fluid Source SOP, you at least give yourself a fighting chance: as you decrease the voxel size in your simulation, your initial density volume will at least keep pace.


The way you have it now, not good. Let the Fluid Source SOP, with its channel reference into the Smoke Object's voxel size (created by the Shelf Tools), drive things until you know what you are doing, then decouple them for more advanced work. Custom mask for turbulence in pyro. Get rid of mushroom effect in explosion and add details in the opening - Page 2 - Effects - od.

Temperature Field and Divergence Field, and what to do about it. Combed straight velocities lead to mushroom puffs. Large directional forces lead to combed straight velocities.

The pressure wave leading the divergence field leads to combed straight velocities. So what to do? Looking at Temperature first, it is directly used with Gas Buoyancy to drive the intensity whereby the upward direction is multiplied by temperature and then added to vel. Temperature is also used by some of the shaping tools to inject noise or trigger confinement within the simulation, amongst other fields.

That's fine and manageable in most cases, as the velocities aren't that large, especially in smoke simulations where the temperature is driven by the sources. What is Divergence? For explosions, there is faint hope. Force field repel. Here is a two-minute setup showing the principle that I would use to make such a shot work. I am using an animated sphere with a noise pattern to break up the collision geo for the fire in dops. I feed it into dops not as an RBD (way too slow with animated geo) but as a collision volume (very fast, at least here).

Also, a lot of the effects that you posted here (everything besides the fire shot) don't look like full sims to me, but rather like shaders (spheres with displacement maps and so on). Simulating fine smoke.