[Closed] script slowdown on making polys
I have a script that rebuilds polys based on another object. I’m using polyop.getFaceVerts and polyop.createPolygon.
The further it gets through an object, the slower it goes. Can anyone give me an idea as to why this might be? Once it gets above 30k polys it becomes prohibitively slow to use, unless you break the original object up into pieces.
Here are some poly counts vs. times:
3665 polys = 2 secs (1832 polys a sec)
7330 polys = 8 secs (916 polys a second)
29320 polys = 141 secs (208 polys a sec)
Cheers,
Cg.
Do you use undo? That can seriously slow things down.
So a with undo off context could come in handy.
Other than that, some sample code showing the problem would be handy.
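Something like this, a minimal untested sketch (`oldPoly`, `newPoly` and `faceCount` are placeholder names for your objects):

```maxscript
-- wrap the rebuild loop in an undo off context so Max doesn't
-- record an undo entry for every single createPolygon call
with undo off
(
    for i = 1 to faceCount do
        polyop.createPolygon newPoly (polyop.getFaceVerts oldPoly i)
)
```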
-Johan
Hey Johan, no, I’m not using undo on this one. I tried briefly, but it made a 7 GB swapfile before Max crashed.
This is the code, very simple:
for i = 1 to faceCount do
(
    faceVerts = polyop.getFaceVerts oldPoly i
    polyop.createPolygon newPoly faceVerts
)
I did try this:
for i = 1 to faceCount do
(
    polyop.createPolygon newPoly (polyop.getFaceVerts oldPoly i)
)
but it was slower still
Maybe try a garbage collect, “gc()” or “gc light:true”, every 4000th cycle:
if mod i 4000 == 0 then gc light:true
Haven’t tested it; I’m behind my too-slow laptop right now…
-Johan
Hi Chris,
bad news from me. I did some research and MaxScript tests with no luck: the script doesn’t scale linearly.
The method you’re using, ‘polyOp.getFaceVerts’, is quite straightforward on the internal data structure side: it simply asks the specified face which verts define it. It doesn’t have to cycle over the whole mesh like some other methods.
Looking at how an Editable Poly mesh is structured (MNMesh in the sdk), it stores a bunch of int arrays and Tab&lt;int&gt; arrays holding the relations between verts, edges and faces. Every time you add a polygon with ‘polyOp.createPolygon’, all these arrays have to be updated or grown. In the latter case, since standard arrays aren’t dynamic structures in C++, they are completely rewritten somewhere else, wherever a chunk of contiguous memory big enough to store each of them can be found. I guess the increasing time per operation depends on this, and I can’t find a way to avoid it from MaxScript. Anyway, I hope to be wrong.
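For what it’s worth, Chris’ timings fit that theory: doubling the faces roughly quadruples the time (3665 → 2 s, 7330 → 8 s, and 29320 is 8× the faces for ~70× the time, close to the 64× a quadratic would predict). Here is a toy MaxScript sketch of that model, where every insert rewrites the whole array instead of growing in place. It only illustrates the suspected behaviour; it is not what MNMesh actually does:

```maxscript
(
    -- toy model: an "array" that is fully rewritten on every insert.
    -- Total cost is 1 + 2 + ... + n = n(n+1)/2, so doubling n
    -- roughly quadruples the running time.
    fn fullCopyInserts n =
    (
        local arr = #()
        for i = 1 to n do
        (
            local tmp = #()
            for j = 1 to arr.count do tmp[j] = arr[j] -- copy everything over
            tmp[i] = i -- then add the new element
            arr = tmp
        )
        arr
    )
    local t0 = timeStamp()
    fullCopyInserts 2000
    format "n=2000: % ms\n" (timeStamp() - t0)
    t0 = timeStamp()
    fullCopyInserts 4000
    format "n=4000: % ms\n" (timeStamp() - t0) -- expect roughly 4x the first figure
)
```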
Here is my test script, assuming an Editable Poly is currently selected. Note that the undo and redraw contexts make no difference in this case, because the first and last brackets define a single code block.
(
    if selection.count == 1 and classOf selection[1] == Editable_Poly then
    (
        local oldPoly = selection[1]
        local iNumFaces = polyOp.getNumFaces oldPoly
        local iNumVerts = polyOp.getNumVerts oldPoly
        -- create an empty Editable Poly
        local newPoly = convertToPoly (plane())
        newPoly.name = oldPoly.name + "_copy"
        polyOp.deleteVerts newPoly #{1..(polyOp.getNumVerts newPoly)}
        -- create verts in newPoly
        undo off
        (
            with redraw off
            (
                for i = 1 to iNumVerts do
                    polyOp.createVert newPoly ((polyOp.getVert oldPoly i) * inverse oldPoly.transform)
            )
        )
        gc()
        local startTime = timeStamp()
        -- create polys in newPoly
        undo off
        (
            with redraw off
            (
                for j = 1 to iNumFaces do
                    polyOp.createPolygon newPoly (polyOp.getFaceVerts oldPoly j)
            )
        )
        local stopTime = timeStamp()
        format "Processing took % seconds\n" ((stopTime - startTime) / 1000.0)
    )
)
- Enrico
Assumes an Editable Poly is selected.
It simply breaks the process into chunks of theSplit faces, then attaches and welds them back together at the end.
Measured the same way Enrico did, on an 8192-face sphere:
Enrico : Processing took 7.781 seconds
JHN : Processing took 1.031 seconds
40k faces: Processing took 18.235 seconds
If you enable the format statements you’ll see a pretty consistent rate of face creation; with this you can balance chunk size against object count. I can imagine further enhancements can be made.
(
    local fc = $.numfaces
    local vc = $.numverts
    local check = 1000 -- spit something to the listener every check faces
    local theSplit = 200 -- new object every theSplit faces
    local theArr = #() -- will contain the subparts
    -- Clean polygon
    local np = convertTo (editable_mesh name:(uniquename "EPoly")) Editable_Poly
    -- Cache face methods
    local pgF = polyop.getFaceVerts
    local pcF = polyop.createPolygon
    -- Cache vert methods
    local pgV = polyop.getVert
    local pcV = polyop.createVert
    -- Build verts-only object
    for i = 1 to vc do
    (
        pcV np (pgV $ i)
        -- if mod i check == 0 then format "verts %:%\n" i vc
    )
    -- Use a copy
    local thePart = copy np
    thePart.name = uniquename ($.name + "_Part_00")
    local startTime = timeStamp()
    local main
    -- Build face array
    undo off
    (
        with redraw off
        (
            for i = 1 to fc do
            (
                if mod i theSplit == 0 then
                (
                    append theArr thePart -- append part
                    thePart = copy np -- operate on new poly
                    thePart.name = uniquename ($.name + "_Part_00") -- rename
                    -- gc() -- should time this
                )
                pcF thePart (pgF $ i)
                -- if mod i check == 0 then format "faces %:%\n" i fc
            )
            append theArr thePart -- append last part
            main = theArr[1]
            for i = theArr.count to 2 by -1 do polyop.attach main theArr[i] -- attach parts to the first part, last first
            polyop.deleteIsoVerts main -- remove all iso verts
            main.weldThreshold = 0.01 -- low threshold
            polyop.weldVertsByThreshold main #{1..main.numVerts} -- weld all
        )
    )
    local stopTime = timeStamp()
    format "Processing took % seconds\n" ((stopTime - startTime) / 1000.0)
    select main -- select result
)
-Johan
Well done Johan,
breaking processes into smaller pieces is always beneficial. It was discussed in this useful thread about attaching meshes: http://forums.cgsociety.org/showthread.php?f=98&t=635477
I was thinking of the sdk, where you can set the size of the MNMesh arrays before actually filling them; that’s impossible in mxs, so I lost the right point of view.
- Enrico
Well actually you can:
theArr = #()
theArr[200] = "" -- assigning past the end grows the array to 200 entries
theArr.count -- returns 200
So you could presumably presize an array and maybe gain some speed there…
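A quick, untested sketch to time that idea, comparing append in a loop against assigning into a presized array:

```maxscript
(
    local n = 500000
    local t0 = timeStamp()
    local a = #()
    for i = 1 to n do append a i
    format "append:   % ms\n" (timeStamp() - t0)

    t0 = timeStamp()
    local b = #()
    b[n] = 0 -- presize by assigning past the end, as above
    for i = 1 to n do b[i] = i
    format "presized: % ms\n" (timeStamp() - t0)
)
```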
My time with the SDK is something of the future, although I would like to…
-Johan
Yes, you can presize an array. Unfortunately, when I researched the impact on speed it seemed barely noticeable. I guess that tip has been in the MaxScript Help for a long time, and was proven true for older 3ds Max releases.
<completely clueless statements>I could be totally wrong about this, but in my opinion arrays in current MaxScript are internally represented by dynamically built linked lists. That would explain how you can store mixed data types in them (all right, there’s the Value wrapper) and grow and shrink them as you like. If that’s true, presizing has no real meaning, because it would create a linked list of dummy objects to be replaced when the real ones were assigned. But again, I could be completely wrong.</completely clueless statements>
I was referring to the MNMesh (Editable Poly base structure) internal arrays: you can set the number of verts and polys before defining them, but in that case I guess it goes the same way: a new array of the specified size is stored somewhere, with the older data copied in. When you then need to add data, it avoids copying again and again, since the space is already there.
If you want to bite the sdk, relax, take a deep breath, then take another one… and prepare yourself to be overwhelmed by a huge monster yelling at you in an unfathomable language
All kidding aside, I had to study C++ to a decent level for some months; only after that did I actually open the sdk help, and after further months spent on it I still feel like a green freshman. You need to be really wilful.
“But remember, the brick walls are there for a reason. The brick walls are not there to keep us out. The brick walls are there to give us a chance to show how badly we want something. Because the brick walls are there to stop the people who don’t want it badly enough. They’re there to stop the other people.” – Professor Randy Pausch.
Johan, yes, I’d thought of splitting it up into separate, more manageable sections. The trouble is that I need these meshes to keep the same vert order, for many reasons, and when welding them back together I can’t see a way to maintain vert order. It’s such a shame, because your way of rebuilding is so quick.
Thanks to both you and Enrico for your replies.