
[Closed] Align object to a vertex of another one?

Hi! I’m attaching one point to a vertex on an (editable) mesh via Attachment Constraint. I want the point to have the orientation of the vertex where it’s constrained, not the face, so I’m unchecking the ‘Align to Surface’ option. Using the following function…

getNormal <mesh> <vert_index_integer>

Returns the normal at the indexed vertex’s position as a point3. The normal is based on the faces that use the vertex and the smoothing groups assigned to those faces.

… to get the normal of the vertex, is it possible to use this data to align the point to the vertex (and keep the orientation using a Rotation Script Controller)? If the answer is ‘yes’, could you point me in the right direction?

The reason for this is that I’m developing a ‘Sticky Points’ script (there are a couple of threads related to that) that actually works, but to build some tools around it (like mirroring points, for example) I think I need to have the orientations under control.

Any advice or suggestion would be truly appreciated!!!

3 Replies

Hi Iker,

There are several ways to convert a vector (the vertex normal in this case) into a rotation value. One is to build a quaternion from a vector perpendicular to the normal and the angle between that vector and the x-axis. Another is to convert the vector into a matrix3 value and extract the rotation from it; the MAXScript function for that is matrixFromNormal, but it doesn’t give you much control (for example, over which axis the normal maps to).
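For reference, a minimal sketch of the matrixFromNormal route (nodeMesh and vertNum are just placeholder names for the mesh node and a vertex index):

-- matrixFromNormal builds a matrix3 whose z-axis is the given normal;
-- the x- and y-axes are chosen by Max, so their orientation can't be picked.
vertNum = 1
normalWorld = (getNormal nodeMesh.mesh vertNum) * nodeMesh.objectTransform.rotation
tm = matrixFromNormal normalWorld
tm.rotation   -- the rotation part as a quat, usable in a rotation script controller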

In this case I would prefer to build a transform matrix and include the vertex position too, so that everything lives in one transform script controller.

A matrix3 value is a set of four vectors (the rows of the matrix). The first three rows hold the direction of each of the node’s axes (x, y, z), i.e. the rotation, and the last row holds the position. The first three rows also carry the scale: the local scale on each axis is the length of the corresponding row (1 being 100%). If the first three rows aren’t perpendicular to each other the matrix isn’t orthogonal, which means the node will be skewed.
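As a quick listener illustration of that layout (just an example matrix, not part of the script below):

tm = matrix3 [1,0,0] [0,1,0] [0,0,1] [10,20,30]  -- three axis rows plus a position row
tm.row4          -- [10,20,30], the position
length tm.row1   -- 1.0, the scale on that axis (1.0 = 100%)
tm.rotation      -- (quat 0 0 0 1), no rotation since the axes match the world axes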

With this in mind, we can use the vertex normal as the x-axis (row1 of the TM), and a vector perpendicular to the normal as the y-axis (row2). To find a perpendicular vector we take the cross product of two vectors; here we can use the normal and the world z-axis ([0,0,1]). Finally, the z-axis (row3) is the cross product of the x-axis and the y-axis.

In the transform script controller:

-- vertex position in world space (getVert returns it in object space)
pos  = (getVert nodeMesh.mesh vertNum) * nodeMesh.objectTransform
-- x-axis: the vertex normal, rotated into world space
row1 = (getNormal nodeMesh.mesh vertNum) * nodeMesh.objectTransform.rotation
-- y-axis: perpendicular to the normal and the (negated) world z-axis
row2 = normalize (cross row1 [0,0,-1])
-- z-axis: perpendicular to the x- and y-axes
row3 = normalize (cross row1 row2)
matrix3 row1 row2 row3 pos

In the script I multiply the vertex position by the node’s TM because the getVert function ignores the node’s transformation. I use objectTransform instead of transform to pick up object-offset transforms made with the “Affect Object Only” / “Affect Pivot Only” modes. To orient the normal with the mesh I multiply it only by the rotation part of that TM.
To get a non-scaled transform I normalize the first three rows, except row1, because the vertex normal is already normalized. And I use a negative z-axis because the cross product follows the right-hand rule, while node transforms in Max follow the left-hand rule.
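In case it helps, this is roughly how that expression could be wired up (node names are placeholders, and I’m assuming the scripted-controller methods addNode / setExpression available since Max 8):

sc = transform_script()                      -- transform script controller
sc.addNode "nodeMesh" $MyEditableMesh        -- exposes the mesh node to the expression
sc.setExpression "
vertNum = 12   -- index of the vertex to stick to
pos  = (getVert nodeMesh.mesh vertNum) * nodeMesh.objectTransform
row1 = (getNormal nodeMesh.mesh vertNum) * nodeMesh.objectTransform.rotation
row2 = normalize (cross row1 [0,0,-1])
row3 = normalize (cross row1 row2)
matrix3 row1 row2 row3 pos
"
$MyPointHelper.transform.controller = sc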

I attached an example made in Max 2008, hope it can be useful :D.

Another thing to consider is that the script controller doesn’t update in real time when its geometry changes (I’m not totally sure about this, but that’s what happens in my Max 2008 version), only when its transform controllers change. The way I found to fix it is using dependsOn.
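In practice that just means adding a line like this at the top of the controller expression (a sketch of how I understand dependsOn):

-- declare a dependency on the mesh node so the controller re-evaluates when the
-- mesh itself changes, not only when its transform controllers change
dependsOn nodeMesh
-- ...then the pos / row1 / row2 / row3 / matrix3 lines as above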

see ya!

Thanks again, Phoelix. The main problem I was having was related to the orientation of the created points on the face (working on my script for attaching points to geometry): symmetrical positions did not have the same orientation, or even a mirrored one. So I decided to go for a ‘mirror points’ solution, for when the mesh is symmetrical. That way, I guess, mirroring information is going to be easier, since both points on each side will have the same orientation.

I’ll take a look at your scene… and thanks again for the valuable information! I’m diving into transformations and matrices, and you’re one of the guys supplying me with oxygen bottles!

Well, I’ve heard that using mirrored nodes is not good at all, because it means negative scales; I’m not sure about this, maybe it could cause problems in the future. But it can also be done that way if you want to: an easy way is to flip the y-axis to a negative y-axis (matrix3 row1 -row2 row3 pos), as in the little sketch below. If I didn’t understand your question correctly, let me know.
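Just a tiny sketch of that, with parentheses around the negated row since MAXScript can otherwise read it as a subtraction:

tmRegular  = matrix3 row1 row2 row3 pos
tmMirrored = matrix3 row1 (-row2) row3 pos   -- flipped y-axis gives a negatively scaled (mirrored) matrix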

About the method I mentioned before: I’ve noticed that the normal vectors are not always normalized, so row1 needs to be normalized too. Also, if the helper has a parent, the controller needs its TM in local (parent) space, so the resulting world TM needs to be converted into the parent’s space.
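Folding both corrections into the expression, it would look something like this (the last line is my reading of “local mode”: the world-space result is brought into the parent’s space; parentNode is assumed to be another node variable added to the controller):

pos  = (getVert nodeMesh.mesh vertNum) * nodeMesh.objectTransform
-- normalize row1 too, since vertex normals are not always unit length
row1 = normalize ((getNormal nodeMesh.mesh vertNum) * nodeMesh.objectTransform.rotation)
row2 = normalize (cross row1 [0,0,-1])
row3 = normalize (cross row1 row2)
worldTM = matrix3 row1 row2 row3 pos
-- only needed when the helper has a parent: bring the world TM into the parent's space
worldTM * (inverse parentNode.transform)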