[Closed] Predict next vertex 3d position based on last two positions

No, I’m not trying to alter the existing mesh at all…it will remain as is. No morphing.
I’m trying to pick a specific point on a mesh and track it throughout the mesh’s recorded, existing animation.
I’ll later use that tracking info to attach another item to that point. That’s my ultimate goal.
I’m already achieving that goal manually: I use an existing vertex for as long as the mesh doesn’t change its vertex assignments… and when it does, I manually pick the new vertex that represents the spot on the mesh I want to track, and use that until the mesh changes again.
I was just hoping to automate the process a bit more.
I’m beginning to think that even if I do come up with something, it won’t be accurate enough. The characteristics of human movement (and the mesh) are probably too chaotic to be predictable.

Thanks anyway…I appreciate the effort!

I’m confident that predicting the position of a specific vertex from the original mesh in the new sequenced mesh is impossible. It would require knowledge of the triangulation algorithm and manual calculations, which is not feasible.

If your character has textures, I recommend trying to snap to a point on the texture rather than the geometry. That approach would give you a 3D position that can be unambiguously computed for all animation phases (sequenced meshes).
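
To sketch what I mean: a minimal, illustrative example in Python, assuming you can dump each sequenced mesh’s vertex positions, triangle indices and per-face UVs into plain lists (the names uv_to_world and barycentric, and the data layout, are made up for illustration, not any particular 3ds Max call). It only stays valid for as long as the UV layout itself doesn’t change:

```python
# Illustrative only: map a fixed UV coordinate back to a 3D point on one
# of the sequenced meshes via barycentric interpolation. The data layout
# (plain lists of tuples) and function names are assumptions, not an API.

def barycentric(p, a, b, c):
    """2D barycentric coordinates of point p in the UV triangle (a, b, c)."""
    v0 = (b[0] - a[0], b[1] - a[1])
    v1 = (c[0] - a[0], c[1] - a[1])
    v2 = (p[0] - a[0], p[1] - a[1])
    den = v0[0] * v1[1] - v1[0] * v0[1]
    if abs(den) < 1e-12:
        return None                               # degenerate UV triangle
    v = (v2[0] * v1[1] - v1[0] * v2[1]) / den
    w = (v0[0] * v2[1] - v2[0] * v0[1]) / den
    return (1.0 - v - w, v, w)

def uv_to_world(uv_point, verts, faces, uvs, uv_faces):
    """3D position on the mesh corresponding to uv_point.

    verts    : list of (x, y, z) vertex positions for this frame's mesh
    faces    : list of (i0, i1, i2) vertex indices per triangle
    uvs      : list of (u, v) texture coordinates
    uv_faces : list of (t0, t1, t2) UV indices per triangle, parallel to faces
    """
    for (i0, i1, i2), (t0, t1, t2) in zip(faces, uv_faces):
        bary = barycentric(uv_point, uvs[t0], uvs[t1], uvs[t2])
        if bary is None:
            continue
        u, v, w = bary
        if min(u, v, w) >= -1e-6:                 # uv_point lies inside this UV triangle
            p0, p1, p2 = verts[i0], verts[i1], verts[i2]
            return tuple(u * p0[k] + v * p1[k] + w * p2[k] for k in range(3))
    return None                                   # not covered by any face / layout changed
```

The tracked spot is then just a fixed (u, v) pair, and you evaluate uv_to_world once per frame against that frame’s mesh.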

I’m sort of getting to that same conclusion myself!
I knew it wouldn’t (or couldn’t) be exact, but I was hoping to get close.
I’ve thought about looking at the texture method as well, but that’s way out of my league (which isn’t saying much).
I’m afraid that would run into similar issues, in that the texture image also changes every time the model/vertex count does. A new single texture map is created and is used until the count changes again.

What are you working on, a game or a film (movie, cinematic)?

Nothing that high end!
Just trying to make a script that will help streamline part of a proprietary process for a couple of VFX guys.
It’s closer to game development than anything, but very specific in its needs.

Originally, I was really hoping that with 3 or 4 known x,y,z positions, and maybe figuring out the trend in velocity, it might be possible to predict the next one… but the deeper I get into it, the harder it looks!
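
For what it’s worth, the “trend in velocity” idea would look roughly like this. This is only a rough Python sketch under a constant-acceleration assumption; the tuple-based data layout and the function names are invented for illustration:

```python
# Rough sketch of the extrapolation idea: assume constant acceleration over the
# last three tracked positions, predict where the spot should be next, then snap
# to the nearest vertex of the newly retopologized mesh. The flat (x, y, z)
# tuples and function names are made up for illustration.

def extrapolate_next(p0, p1, p2):
    """Predict the next position from the three most recent ones.

    With constant acceleration: next = p2 + velocity + acceleration,
    which reduces to 3*p2 - 3*p1 + p0 per axis.
    """
    return tuple(3 * c2 - 3 * c1 + c0 for c0, c1, c2 in zip(p0, p1, p2))

def nearest_vertex(point, verts):
    """Index of the vertex closest to the predicted point."""
    def dist_sq(v):
        return sum((a - b) ** 2 for a, b in zip(v, point))
    return min(range(len(verts)), key=lambda i: dist_sq(verts[i]))

# Usage: when the vertex count changes, predict the tracked spot's position and
# pick the closest vertex of the new mesh as the new anchor.
predicted = extrapolate_next((0.0, 1.0, 2.0), (0.1, 1.1, 2.2), (0.3, 1.2, 2.5))
# new_index = nearest_vertex(predicted, new_mesh_verts)   # new_mesh_verts: this frame's verts
```

Whether the prediction lands close enough to the right vertex is exactly the accuracy problem described above.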

As I stated up front, I’m a novice when it comes to scripting, and sadly, my math skills are decades old… which is a big understatement!

Again…thanks!

does each mesh have exactly the same number of verts?

does the mesh have uv’s?

No… both verts and polys change from time to time, depending on the complexity needed during the live capture of the subject.
Picture a man standing, then crossing his arms across his body… fewer polys are needed where the arms come together across the chest area.

Yes it has uv’s.

does each model have a unique texture, or is it one for all models?

If a new texture is used for each mesh, I don’t see any way to track a given point on the original mesh.

The only option left is to somehow get attached to a “geometrically-special” point (like a “button”) or a “color-special” point (a “bright spot”), and try to track that.
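
If the “bright spot” route is viable, the texture side of it could be something as crude as this. A rough Python sketch using Pillow, assuming the marker really is the brightest region of each frame’s texture image (which you’d have to guarantee by painting or lighting it); the resulting (u, v) could then be mapped back to 3D with the same kind of barycentric lookup as above:

```python
# Crude sketch of a "bright spot" tracker on the texture side, using Pillow.
# It assumes the marker is the single brightest region of each frame's texture
# image; image paths would come from the frames listed in the .ifl file.

from PIL import Image

def brightest_uv(image_path):
    """Return the (u, v) coordinate of the brightest pixel in one texture frame."""
    img = Image.open(image_path).convert("L")     # grayscale, so brightness is one value
    w, h = img.size
    pixels = img.load()
    best, best_xy = -1, (0, 0)
    for y in range(h):
        for x in range(w):
            if pixels[x, y] > best:
                best, best_xy = pixels[x, y], (x, y)
    # Pixel -> UV: u runs left to right, v is flipped because image row 0 is the top
    return (best_xy[0] / (w - 1), 1.0 - best_xy[1] / (h - 1))
```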

Sorry for the late reply… internet was out.
Each character has a single ‘texture’ imported that is an .ifl file… so a series of textures.
In the case I’m working on, the character animation length is 180 frames, and the mesh changes 28 times in that duration. Every time the mesh changes poly or vertex count, so does the overall layout of the texture map. During the times the poly count doesn’t change, the layout of the tex map doesn’t change, but you can see the individual sections of it updating (facial expressions, movement of cloth wrinkles, etc.). So every image in the .ifl file is different.

If you’d post examples of what you’re working with, we could try to work something out and help somehow.
So, think of ways to get around the NDA.
