Actually, the Hair and Fur modifier and other similar scattering systems don't randomize positions on the existing surface. Those algorithms re-triangulate the surface into a new one whose vertex/face count matches the number of distributed objects. After that, the algorithm just puts an object at a vertex position or at the center of a face (plus a slight random deviation from that position). If you have enough memory (using C++ and the SDK), you can re-triangulate any surface pretty fast.
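To illustrate just the placement step, here is a minimal MAXScript sketch (my own, not from the actual modifier; the re-triangulation part is skipped, and scatterOnFaceCenters, theMesh and jitterAmount are assumed names). It samples random face centers of a TriMesh value and adds a small random deviation:

fn scatterOnFaceCenters theMesh count jitterAmount =
(
	local positions = #()
	positions.count = count -- pre-size the array instead of appending (see the discussion later in the thread)
	local numFaces = theMesh.numFaces
	for i = 1 to count do
	(
		local f = getFace theMesh (random 1 numFaces) -- vertex indices of a random face
		-- the face center is the average of the three vertex positions
		local c = (getVert theMesh (f.x as integer) + getVert theMesh (f.y as integer) + getVert theMesh (f.z as integer)) / 3.0
		-- slight random deviation from that position
		positions[i] = c + random [-jitterAmount,-jitterAmount,-jitterAmount] [jitterAmount,jitterAmount,jitterAmount]
	)
	positions
)

You could feed it a snapshot of any object, e.g. scatterOnFaceCenters (snapshotAsMesh $Plane001) 1000 0.5 (the plane name is hypothetical).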
First of all, Hair And Fur does NOT create thousands of objects. It creates the guides in the viewport within the current object's stack, so it doesn't even make one new scene node.
Creating many objects is slow. Even if you look at how long it takes Hair And Fur to create ONE spline object with all the hairs when you use the convert-to-spline option, you will see it takes a bit of time.
Second, Hair And Fur does NOT care about intersections during distribution. As we saw earlier, my first script needed 0.034 seconds to create 1,000 points on a 20×20 plane and 0.618 seconds to create 10,000. Keeping in mind that SDK plugins are several orders of magnitude faster than MAXScript, it is no wonder Hair And Fur can generate several thousand guides in near-realtime.
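For reference, that kind of timing test can be reproduced with something along these lines (a sketch only; the original script is not shown here and may have done more work, and I assume a 20×20 plane centered at the origin):

fn timeRandomPoints count =
(
	local st = timeStamp() -- milliseconds
	local pts = #()
	pts.count = count
	for i = 1 to count do
		pts[i] = [random -10.0 10.0, random -10.0 10.0, 0.0] -- random point on the plane
	format "% points in % ms\n" count (timeStamp() - st)
)
timeRandomPoints 1000
timeRandomPoints 10000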
Max is generally challenged by more than 10,000 individual objects in the scene, no matter what the source. That's why both Hair and PFlow generate ONE object to represent the hair distribution / particle system. PFlow has the option to generate several meshes at render time to avoid memory problems above 5 million polygons, but none of these systems creates the number of objects you are attempting. One million objects is OUT OF THE QUESTION even on my 16GB system in the office – it is asking for trouble.
The same applies to all Compound objects in Max that do things like scattering, etc. VRay and finalRender can handle millions of instances at render time, but Max itself cannot.
So if your objects do not have to be individual for some reason, you could combine them into a few larger meshes to speed things up. 50M polygons in a few objects are no problem for Max on a good system, so if you have 1 million objects with 50 polygons each combined into one or a few objects, the scene will run a lot faster. Of course, the generation process will still be somewhat slow because attaching meshes also hits the memory hard (be sure to follow the recommendations in the MAXScript Help and disable the Undo while doing that), but once you are done, you should be able to move around the scene quite fast.
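As a rough sketch of that combining step (assuming sourceObjects holds an array of Editable_Mesh nodes; the variable names are mine):

with undo off -- per the MAXScript Help recommendation
(
	local target = sourceObjects[1]
	for i = 2 to sourceObjects.count do
		meshop.attach target sourceObjects[i] -- attach consumes the source node
)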
That being said, I feel that the more help I provide, the more you demand, so I will stop here. If this approach does not work for you, you will have to find another approach. Finding workarounds is part of life with Max, so I wish you luck with whatever you are trying to accomplish.
Note that I had to optimize the array growing by pre-initializing the size to the desired number and assigning by index instead of just appending. When I was appending 1 million times, growing the array in memory via append() caused severe slowdowns because the HeapSize was being increased automatically.
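In code, the difference is simply this (a minimal sketch):

-- slow for large counts: append() keeps growing the array (and the heap)
arr = #()
for i = 1 to 1000000 do append arr i

-- faster: pre-initialize the size once, then assign by index
arr = #()
arr.count = 1000000
for i = 1 to 1000000 do arr[i] = i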
Really? Thanks for this, that's very useful info.
Martin
Thankfully, it has been in the Help for quite a few years.
Check out the topic "Pre-initialize arrays when final size is known", which is part of the "How to make it faster?" set of topics.
The same is true for Strings – concatenating strings is a bad idea if they are long, because each time you add to a string, a copy is made in memory. It is suggested to format or print into a StringStream instead (see the "Use StringStream to build large strings" topic).
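A minimal sketch of the StringStream approach:

ss = stringStream "" -- build the text in a stream instead of concatenating
for i = 1 to 1000 do
	format "line %\n" i to:ss
bigString = ss as string -- one final conversion instead of thousands of copies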
Hey, thanks Bobo,
I must have missed that part.
You did nice work on the help file, by the way; it's very well done and easy to use!
Regards,
Martin
I was aware that we could initialize the array this way, but I didn't know that it made things faster.
One question I ask myself: let's say the heapSize is large enough that append() would not cause any automatic increase… do I still gain speed by pre-initializing?
It looks like with a large enough heap, the indexed access is a bit faster. Filling an array with 10M random integers, the preinit/indexed access approach took 8570 ms vs 9768 ms for the append() method.
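Something like this sketch will reproduce the comparison (your numbers will of course differ):

fn timeAppend count =
(
	local st = timeStamp()
	local arr = #()
	for i = 1 to count do append arr (random 1 100)
	timeStamp() - st -- elapsed milliseconds
)
fn timePreInit count =
(
	local st = timeStamp()
	local arr = #()
	arr.count = count -- pre-initialize, then assign by index
	for i = 1 to count do arr[i] = random 1 100
	timeStamp() - st
)
format "append: % ms, preinit: % ms\n" (timeAppend 10000000) (timePreInit 10000000)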
Thanks, that is good to know. If a call to append() makes things slower, then I will just pre-initialize everything… but I suppose it is good scripting practice anyway, since the user may have other scripts eating the heap.
Martin Dufour
Sometimes we don't know how large the array will be. And how often do you collect millions of values? The difference in speed is small, and in fact for 1M iterations append() appeared to be slightly faster (772 ms vs 823 ms), but it gets slower at 10M…
This might be just an academic conversation
Hehe, thanks, that was fast. I'm kind of slow when writing English; I'm French, by the way.
So append() slows things down. But maybe another question: if I have a 100-element array and I need to grow it, do you think there's an alternative solution that would be faster than append()?
Martin
Growing a 100-element array should be very fast; I don't think you could even time it. Even with a million elements, both approaches take less than a second on my machine. I just make sure I have at least 64MB of heap (or better, 128) after a fresh Max install.
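That heap check can even be scripted (a small sketch; heapSize is in bytes, and the heap can grow but not shrink at runtime):

if heapSize < 128*1024*1024 do heapSize = 128*1024*1024 -- grow the MAXScript heap to ~128 MB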