[Closed] MaxScript takes a long time to complete
I have been working on some scripts that process 5000-13000 meshes per file.
I am struggling with the amount of time it takes to do anything. I created a test scene with 7000 boxes to run my tests on, but the results are far different from what I see in my actual scene.
I have adopted ivanisavich's array method, which is working great!
http://forums.cgsociety.org/showthread.php?f=98&t=922140
I notice that each time I run my routine, the amount of time it takes to process increases. Not by much, but enough to notice.
What eventually happens is that Max seemingly freezes up on me, and I end the task after a few minutes. I have increased my heap allocation to 128 MB, but that doesn't seem to help.
What my script does is:
[ul]
adds the objects to an array
adds each object's bounding box to another array
creates a new array of unique names -> http://bit.ly/qotC70 (is there a better way?)
[/ul]
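(One possible answer to the "better way" question, assuming a flat array of name strings and a Max version that has it: MAXScript's built-in makeUniqueArray removes duplicates in one call. The sample names below are made up.)

```maxscript
-- makeUniqueArray returns a copy of the array with duplicate entries removed.
-- Hypothetical sample data:
names = #("wall_01", "wall_01", "door_02", "door_02", "roof_03")
uniqueNames = makeUniqueArray names -- -> #("wall_01", "door_02", "roof_03")
```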
Then I send the arrays to various functions that:
[ul]
renames the objects based on their names and bounding boxes
adds UVW Map
adds material
[/ul]
I notice that everything is OK until the cursor turns into a spinning icon. Once that happens, it's a 50/50 chance the script will finish.
If anyone has some memory management techniques I am all ears. Thanks!
I see 2.5 operations that can cause the system to freeze:
- making the unique name array
- renaming based on…
- applying a specific material (possible, but very unlikely)
All the other operations shouldn't take any noticeable time.
Can you show a sample of the name array that you want to make unique?
Just show the first 5-10 members…
for big arrays (1000+) it might be 200+ times faster:
fn _mxs_uniqueArrays theArray =
(
    fn compareSubArrays first second =
    (
        result = true -- init. return value to true
        if first.count != second.count then -- if the counts of the two subarrays differ,
            result = false -- return false
        else -- otherwise
            for i = 1 to first.count do -- go through all elements in the arrays
                if first[i] != second[i] do result = false -- and see if two elements are different
        result -- return the result - true if identical, false if not
    )
    for i = 1 to theArray.count do -- go through all elements of the main array
        for j = theArray.count to i+1 by -1 do -- go backwards from the last to the current+1
            if compareSubArrays theArray[i] theArray[j] do
                deleteItem theArray j -- if identical, delete the one with the higher index
    theArray
)
fn _dts_uniqueArrays theArray = -- by denisT
(
    hashes = #()
    for a in theArray where (findItem hashes (h = getHashValue a 0)) == 0 collect (append hashes h; a)
)
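To illustrate (with made-up sample data) what the hash-based collect above does:

```maxscript
-- getHashValue <value> <seed> folds a value (here a whole sub-array) into an
-- integer, so identical sub-arrays hash identically and duplicates are skipped.
sample = #(#(1,2), #(1,2), #(3,4))
_dts_uniqueArrays sample -- keeps the first #(1,2) and the #(3,4)
```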
(
    seed 1
    theArray = for k=1 to 20000 collect (for k=1 to 4 collect (random 1 4))
    gc()
    t1 = timestamp()
    new = _mxs_uniqueArrays theArray
    format "MXS HELP >> after:% %\n" new.count new
    format "   time:%\n" (timestamp() - t1)
    seed 1
    theArray = for k=1 to 20000 collect (for k=1 to 4 collect (random 1 4))
    gc()
    t1 = timestamp()
    new = _dts_uniqueArrays theArray
    format "BY DENIST >> after:% %\n" new.count new
    format "   time:%\n" (timestamp() - t1)
)
Denis, thank you for your reply. I was at the airport when I got your reply and was out of pocket till yesterday. So, I have not had a chance to test your method.
There were a couple of things going on with this one.
First, I was processing the arrays incorrectly, which was causing loooong wait times, often forcing me to end the Max process. I fixed those portions of the script so that it no longer hangs. The make-unique method I am using doesn't actually take a lot of time – maybe 3 seconds on 7000 objects. However, I am still wrestling with some wait times.
For example, this little gem took 76 seconds on 10,000 objects…
(
    tstart = timestamp()
    thearr = #()
    for o in objects do append thearr o
    j = 1
    count = thearr.count
    undo off
    (
        while count > 1 do
        (
            o = thearr[j]
            theName = o.name
            if superclassof o == geometryclass then
            (
                c = snapshot o
                c.name = theName
                c.parent = undefined
                -- if (chk_toPoly.checked) then
                -- (
                --     convertto c editable_poly
                -- )
                o.name = "deleteme"
            )
            j += 1
            if j > count then count = 0
        )
    )
    delete $*deleteme*
    tend = timestamp()
    print ("done: finished. Total time: " + ((tend-tstart)/1000.0) as string + "s")
)
on 1000 objects: "done: finished. Total time: 1.222s"
on 5000 objects: "done: finished. Total time: 27.843s"
Weird…
Interesting…
Here is an optimized version:
(
    local tstart = timestamp()
    local thearr = objects as array -- shorter way to collect; an alternative is 'for o in objects collect o'
    with undo off
    (
        local toDelete = for o in thearr collect -- while looping, collect what is to be deleted!
        (
            if findItem geometryclass.classes (classof o) > 0 then -- see below for explanation!
            (
                local c = snapshot o
                c.name = o.name -- this is fast, probably because the object was just created
                c.parent = undefined
                o -- returns the object to collect
            )
            else dontcollect -- otherwise don't collect
        )
        delete toDelete -- delete the collected originals
        local tend = timestamp()
        "done: finished. Total time: " + ((tend-tstart)/1000.0) as string + "s"
    )
)
1000: 0.331s
5000: 2.145s
10000: 5.627s
The strange Geometry test I used up there is a special workaround for cases where the geometry object happens to be a PFlow or a complex plugin with a heavy modifier stack. Calling superclassof() causes a reevaluation of the stack on some objects which can take a looong time, because Max has to figure out what the class is on top of the stack. Using classof() does not cause that. So using superclassof() and the built-in collections Lights, Geometry, Helpers etc. is a bad idea when writing performance code.
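In other words, for some node obj the two ways of testing boil down to this (a sketch distilled from the code above):

```maxscript
-- Slow on heavy objects: superclassof can force the modifier stack to evaluate
-- just to find out what class sits on top of it
isGeomSlow = superclassof obj == GeometryClass
-- Fast: classof avoids the stack reevaluation; geometryclass.classes lists
-- every class whose superclass is GeometryClass
isGeomFast = findItem geometryclass.classes (classof obj) > 0
```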
The only part of your code that is really slow (and it is not even directly your fault) is the renaming of the existing object o. Renaming existing objects is very very slow, but funnily enough, renaming the snapshot c is not (there must be reasons for that, deep down in the code paths of Max). So even a minor change to your existing code could bring it down to 2 seconds for 5000 objects:
(
    tstart = timestamp()
    thearr = #()
    for o in objects do append thearr o
    j = 1
    count = thearr.count
    toDelete = #()
    with undo off
    (
        while count > 1 do
        (
            o = thearr[j]
            if superclassof o == geometryclass then
            (
                c = snapshot o
                c.name = o.name
                c.parent = undefined
                append toDelete o
            )
            j += 1
            if j > count then count = 0
        )
    )
    delete toDelete
    tend = timestamp()
    print ("done: finished. Total time: " + ((tend-tstart)/1000.0) as string + "s")
)
EDIT: The above tests were performed with free unparented objects. Manipulating the scene hierarchy when some of the geometry is parented to a helper is a completely different story – the performance goes down again. I suspect that renaming and unparenting cause similar updates under the hood. So my examples above would not solve the problem if you are working with hierarchies, sorry…
There is a nasty bug in versions before Max 2012 (maybe also before 2011, but I haven't had a chance to check)…
It's the get-unique-name function: it's very slow. Stupidly slow!
The same code (the optimized code above) works very differently in 2010 and in 2012:
undo off
(
    t1 = timestamp()
    origins = geometry as array
    for obj in origins do
    (
        node = snapshot obj
        node.name = obj.name
        node.parent = undefined
    )
    delete origins
    format "time:%\n" (timestamp() - t1)
)
For 5000 nodes:
max 2010 – time = 43.5 s
max 2012 – time = 1.5 s
The bottleneck in 2010 is node = snapshot obj.
At that moment Max searches for a unique name for the new node, and that kills Max 2010.
In Max 2012 the bug was fixed, and you can see the result.
But, following their usual style, the Max developers broke something else, and now node creation with a defined name works slower than without one…
Well, the best equivalent of the function above that I found for Max 2010 is:
undo off
(
    t1 = timestamp()
    origins = geometry as array
    maxOps.cloneNodes origins actualNodeList:&origins newNodes:&new
    for k=1 to origins.count do
    (
        convertToMesh new[k]
        new[k].name = origins[k].name
        new[k].parent = undefined
    )
    delete origins
    format "time:%\n" (timestamp() - t1)
)
– time = 2.4 s
Bobo and DenisT, thanks so much for your help on this!!
So, in a strange twist of fate, my trial of Max 2012 ran out today and the network reinstall would not see the license. I reconverted all my CAD files to Max 2011 and ran the scripts. Now I have a scene with all the geometry (52,000 objects @ 6M polys), and the viewport speed is fantastic, as are the render times.
It's starting to look like Max 2012 was the primary bottleneck.
Thanks again!!
Dave
More testing, this time on 2011
I commented out the editable_poly conversion and the c.parent = undefined check, and am getting a 45-second run time on 24,000 objects. Totally acceptable. Thanks guys!!