[Closed] MS + DotNet MultiThreading with objects creation
Yeah, yeah, DenisT's fast attach method.
But why is this method slower than the cluster one? It uses the insertItem and deleteItem array methods, and it is still faster than a simple for loop.
fn attachSelections source nodes returnAs:#EMesh =
(
local poAttach = polyop.attach
for i = nodes.count to 1 by -1 do poAttach source nodes[i]
if returnAs != #EPoly then convertToMesh source else source
)
fn clusterAttach source nodes =
(
k = 1
insertitem source nodes 1
attach = polyop.attach
while nodes.count > 1 and not keyboard.escpressed do
(
attach nodes[k] nodes[k+1]
deleteItem nodes (k+1)
k += 1
if (k+1) > nodes.count do k = 1
)
nodes[1]
)
gc()
t1 = timestamp()
m1 = heapfree
nodes = (selection as array) ; converttopoly nodes
source = nodes[nodes.count]
nodes.count = nodes.count-1
--clusterAttach source nodes
attachSelections source nodes returnAs:#EPoly
format "cluster >> time:% memory:%
" (timestamp() - t1) (m1 - heapfree)
the fictitious speed of proboolean attach is because we don't do any mesh updates before the end of attaching. but after that we have to rebuild all normals, ids, smoothing groups, etc., and that's a necessary part of the process. so we have to include the update time in the calculation.
Your test is rigged. There is no need to convert to poly here… so without that:
boolean >> time:31 memory:19328L
OK
cluster >> time:153 memory:19392L
OK
this is not true again… you don't need to convert to poly, but you must update the proboolean object, so we have to include this time in the calculation as well. ok, change converttopoly to completeredraw, for example…
Ahh, my bad… I found the timings to be a bit on the fast side. I wonder how both scale and if there is a tipping point somewhere…
btw… I was able to optimize it a bit and get rid of the for/next loop by directly feeding it an array of nodes:
fn booleanAttach source nodes =
(
ProBoolean.createBooleanObjects source nodes 4 2 1
completeredraw()
source
)
But cluster remains faster: by a factor of around 3x, dropping to about 1.5x as you increase the node count, and then it levels off…
i’ve discussed fast attach methods many times on this forum… there is no mystery in this subject.
the speed of a method is proportional to number of mesh updates.
let’s say we have to attach 8 nodes, each with 1 poly in its mesh.
a poly object updates its mesh after every attachment.
so here is a linear method and number of updates:
1 + 1 = 2 updates
2 + 1 = 3 updates
…
7 + 1 = 8 updates
all together is (2+3+4+5+6+7+8) = 35 updates
here is a cluster method:
1 + 1 = 2
1 + 1 = 2
1 + 1 = 2
1 + 1 = 2
2 + 2 = 4
2 + 2 = 4
4 + 4 = 8
all together is (2+2+2+2+4+4+8) = 24 updates
so we can write some formulas that calculates number of updates in our sample:
(the cluster formula is not exactly correct, but it is good enough to show the difference. anyone is free to post the correct one):
fn linearUpdates count =
(
count = count/2*2
local num = 0
for k=1 to count-1 do num += k+1
num
)
fn clusterUpdates count =
(
count = count/2*2
local n, a = 2, num = 0
while (n = count/a) > 0 do
(
for k=1 to n do num += a
a *= 2
)
num
)
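Picking up the “anyone is free to post the correct one” invitation: below is a sketch of an exact counter that simulates the pairwise merging round by round (clusterUpdatesExact is a made-up name, not part of the original post). It adds up the size of each merged result, which for the 8-node example above gives the same 24 updates; for a node count that is a power of two it works out to count * log2(count).

```maxscript
-- exact cluster update count: simulate the pairwise merges round by round
fn clusterUpdatesExact count =
(
	local sizes = for k = 1 to count collect 1 -- every node starts with 1 poly
	local num = 0
	while sizes.count > 1 do
	(
		local merged = #()
		local i = 1
		while i < sizes.count do
		(
			local s = sizes[i] + sizes[i+1]
			num += s -- each merge costs one update of the combined mesh
			append merged s
			i += 2
		)
		if i == sizes.count do append merged sizes[i] -- odd one carries over to the next round
		sizes = merged
	)
	num
)
-- clusterUpdatesExact 8 gives 24, matching the hand count above
```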
what we have:
linearUpdates 8
35
clusterUpdates 8
24
not a big deal, is it?
well…
linearUpdates 100
5049
clusterUpdates 100
552
oops… ~9 times difference
linearUpdates 10000
50004999
clusterUpdates 10000
123456
wow! ~405 times difference.
but the funniest thing about everything said above is that we really need only 10000 updates to attach 10000 one-poly nodes.
of course we have to understand that the proportion between the number of updates and speed is not linear. the poly object does its update smarter than a primitive: it has some cached local data which updates faster than newly added data.
jonadb: Your script is very impressive!
What you could do is save your scene 8 times, each with a different name that includes a frame range. But before you do that, set up a startup script that extracts the frame ranges from the filenames, starts the calculation for them, and automatically saves the file afterwards.
So you get 8 files like this:
file_0_200.max
file_201_400.max
etc. Start Max via shellLaunch, each instance loading a different file… the startup script kicks in, performs the operation on the set ranges, and saves when done. Merge the results into one file and you’re done.
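A minimal sketch of what that startup script might look like, assuming the file_START_END.max naming shown above (framesFromFileName is a hypothetical helper, and the actual calculation step is left as a placeholder):

```maxscript
-- hypothetical startup script: pull the frame range out of the scene file name
fn framesFromFileName fname =
(
	local parts = filterString (getFilenameFile fname) "_" -- "file_0_200" -> #("file","0","200")
	if parts.count >= 3 then
		#(parts[parts.count-1] as integer, parts[parts.count] as integer)
	else undefined
)

local range = framesFromFileName maxFileName
if range != undefined do
(
	animationRange = interval range[1] range[2]
	-- ... run the heavy calculation over animationRange here ...
	saveMaxFile (maxFilePath + maxFileName) -- write the finished result back
)
```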
Exactly what I’m planning to do. I’ve already started; thank you again for your help!
Thx! But there was a slight timing-measurement issue… while it is fast, DenisT’s cluster script is faster and scales better with large object counts.
Yeah, it seems to be the fastest. I’ll change the boolean method after the 8× Max script launch.
this is not my original idea. it was shown for the first time on this forum by ivanisavich:
http://forums.cgsociety.org/showpost.php?p=6704788&postcount=1