The test I posted is with 1000 nodes and random distribution, but I've gotten similar results with different node counts (tested up to 5000) and all distribution options.
Will try to continue optimizing…
For 1000 nodes the qsort takes me 4ms … doesn’t sound like it is the bottleneck.
I just wanted to say that even while I'm getting my a** kicked here, I really enjoy these challenges. Very informative, and I'm learning a lot here!
Yes, my check of
if nodes[k+2] != undefined and nk.numVerts >= nodes[k+2].numVerts do k += 1
makes it redundant to sort the array more than once…
Running the qsort every cycle actually results in more total updates, not less.
It shouldn't be, but the idea is right. We have to minimize the number of updates. Updates == poly rebuild.
That's why the linear Max method works so slowly.
Let's say we have a 100-vertex node to attach to, and two one-vertex nodes to attach.
Max attaches the first node: 100+1 updates.
After that it attaches the second: 101+1 updates… all together 203 updates…
Instead, we can do it this way:
attach the first to the second: 1+1
attach the result: 100+2… together 104 updates.
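A minimal sketch of that counting (totalUpdates is a hypothetical helper, not code from any post here; it assumes one attach costs vertA + vertB updates):

fn totalUpdates vertCounts =
(
	local total = 0
	while vertCounts.count > 1 do
	(
		sort vertCounts -- always merge the two lightest nodes
		total += vertCounts[1] + vertCounts[2] -- cost of this attach
		vertCounts[1] += vertCounts[2] -- the merged node replaces the first slot
		deleteItem vertCounts 2
	)
	total
)
-- totalUpdates #(100, 1, 1) returns 104, matching the count above;
-- the linear order (100+1, then 101+1) would give 203.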
The way I see it, the goal is basically to always be attaching the two objects in the array with the lowest vertex counts.
I believe my algorithm maintains that goal, but I will have to check again tomorrow.
Haven't had much time (E3 is almost here), but here are some things I've tried:

- tried to hijack the Mesher compound object, to no avail
- been musing over possible Max systems that could attach things faster than any MaxScript; haven't figured that one out yet
- tried the traditional attach method with different structures/code layouts; nothing from that, all way slow
- had an "Unknown System Exception" a few times, but it was just how I was collecting/attaching the arrays, so that one was my fault :D

Haven't tried much clustering, since that is what has already been done a lot.
More to try!
So further testing has shown my previous algorithm does not always attach the two lightest objects.
I've come up with this algorithm, which does always attach the two lightest, resulting in the minimal possible number of updates.
Unfortunately, due to its added complexity it is a bit slower than my previous algorithm and just about the same speed as cluster attach. Perhaps I am overlooking a place where it can be optimized further.
@Denis: Out of curiosity… by how much, in percentage, is your algorithm faster than cluster attach?
fn customAttach source nodes pb:pb1 =
(
	insertItem source nodes 1
	local count = nodes.count
	local att = polyop.attach
	-- parallel arrays: vertex count per node, plus a sorted copy for lookups
	local vertArr = for o in nodes collect o.numVerts
	local sortedVertArr = deepCopy vertArr
	for i = 1 to nodes.count-1 do if not keyboard.escPressed do
	(
		pb.value = 100 - nodes.count*100./count -- progress: fraction of nodes consumed
		sort sortedVertArr
		-- find the two lightest nodes; mark the first as taken (-1)
		-- so equal vertex counts can't return the same index twice
		local a = findItem vertArr sortedVertArr[1]
		vertArr[a] = -1
		local b = findItem vertArr sortedVertArr[2]
		local nA = nodes[a]
		att nA nodes[b]
		-- the merged node replaces slot a; slot b is removed everywhere
		sortedVertArr[1] = vertArr[a] = nA.numVerts
		deleteItem vertArr b
		deleteItem sortedVertArr 2
		deleteItem nodes b
		verts += nA.numVerts -- global update counter used by the benchmark
	)
	nodes[1]
)
Create. Time: 1.181 Memory: 1240104L First: p0 Last: p999 Random Range Dist.
#cluster: with undo:on Time:2.172 Memory:72192L Updates:5567158
#this algorithm: with undo:on Time:2.14 Memory:252192L Updates:5256184
#prev. algorithm: with undo:on Time:2.099 Memory:72128L Updates:5297612
5-20%… my algorithm works better for a large number of nodes with big differences in vertex counts.
Lo, if I understand Denis correctly, I believe he is saying that there is a sort method that can be included in the while loop that is faster than the logic your algorithm requires. Although I can't figure out what sort method other than qsort should be used, here is a simple example:
fn attachObjs arr =
(
	-- compare by vertex count so the lightest objects sort first
	fn qsortFn obj1 obj2 = obj1.numVerts - obj2.numVerts
	qsort arr qsortFn
	local att = polyop.attach
	while arr.count > 1 do
	(
		att arr[1] arr[2] -- attach the two lightest objects
		deleteItem arr 2
		qsort arr qsortFn -- re-sort so the merged object finds its place
	)
	arr[1]
)
In my tests it is faster than Lo's methods when dealing with object counts below 200 with varying vertex amounts. Once the object count goes over 200, Lo's method wins out because the qsort becomes a liability.
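One guess at what Denis means by a cheaper sort inside the loop: after each attach, only the merged object's position changes, so instead of a full qsort you can remove it and re-insert it with a binary search. A sketch along those lines (attachObjsInsert and the low/high names are mine, not from any post here):

fn attachObjsInsert arr =
(
	fn qsortFn obj1 obj2 = obj1.numVerts - obj2.numVerts
	qsort arr qsortFn -- one full sort up front
	local att = polyop.attach
	while arr.count > 1 do
	(
		att arr[1] arr[2] -- attach the two lightest objects
		deleteItem arr 2
		local merged = arr[1]
		deleteItem arr 1
		-- binary search for where the grown object now belongs
		local low = 1
		local high = arr.count + 1
		while low < high do
		(
			local mid = (low + high) / 2
			if arr[mid].numVerts < merged.numVerts then low = mid + 1 else high = mid
		)
		insertItem merged arr low
	)
	arr[1]
)

Untested for speed; the deleteItem/insertItem shuffling is itself O(n) per cycle, so it may only pay off when the qsort dominates.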
How about this? I've written a C# assembly that calculates the optimal order of attachment, but while getting the optimal order is very fast, the attach loop, which is now very short, is still slower! I can't really explain the reason for this; can anyone else?
The total update count shows that the minimal number of updates is being used.
C# assembly:

fn createAssembly =
(
	-- build the C# source as one string (\n escapes instead of literal line breaks)
	source = ""
	source += "using System;\n"
	source += "using System.Collections;\n"
	source += "namespace MegaAttach\n"
	source += "{\n"
	source += "    public class OptimalOrder\n"
	source += "    {\n"
	source += "        public static Int32[] FindOrder(Int32[] verts)\n"
	source += "        {\n"
	source += "            Int32[] SortedVerts = (Int32[])(verts.Clone());\n"
	source += "            Int32[] result = new Int32[verts.Length*2];\n"
	source += "            for (int i = 0; i < result.Length; i+=2)\n"
	source += "            {\n"
	source += "                Array.Sort(SortedVerts);\n"
	source += "                int a = Array.FindIndex(verts, delegate(Int32 v) {return v == SortedVerts[0];});\n"
	source += "                int vA = verts[a];\n"
	source += "                verts[a] = -1;\n"
	source += "                int b = Array.FindIndex(verts, delegate(Int32 v) {return v == SortedVerts[1];});\n"
	source += "                result[i] = a+1;\n"       -- 1-based indices for MAXScript
	source += "                result[i + 1] = b+1;\n"
	source += "                SortedVerts[0] = verts[a] = vA + verts[b];\n"  -- combined vert count
	source += "                SortedVerts[1] = verts[b] = Int32.MaxValue;\n" -- retire the consumed node
	source += "            }\n"
	source += "            return result;\n"
	source += "        }\n"
	source += "    }\n"
	source += "}\n"
	csharpProvider = dotNetObject "Microsoft.CSharp.CSharpCodeProvider"
	compilerParams = dotNetObject "System.CodeDom.Compiler.CompilerParameters"
	compilerParams.ReferencedAssemblies.AddRange #("System.dll")
	compilerParams.GenerateInMemory = on
	compilerResults = csharpProvider.CompileAssemblyFromSource compilerParams #(source)
	if compilerResults.Errors.Count > 0 then
	(
		errs = stringStream ""
		for i = 0 to compilerResults.Errors.Count-1 do
		(
			err = compilerResults.Errors.Item[i]
			format "Error:% Line:% Column:% %\n" err.ErrorNumber err.Line err.Column err.ErrorText to:errs
		)
		messageBox (errs as string) title:"Errors encountered while compiling C# code"
		format "%\n" errs
		return undefined
	)
	assembly = compilerResults.CompiledAssembly
	assembly.CreateInstance "MegaAttach.OptimalOrder"
)
global attachClass = createAssembly()
usage:
fn customAttach source nodes pb:pb1 =
(
	insertItem source nodes 1
	local att = polyop.attach
	-- ask the .NET class for the whole attach order up front (pairs of 1-based indices)
	local optimalOrder = attachClass.FindOrder (for o in nodes collect o.numVerts)
	local count = optimalOrder.count-2 -- skip the last pair; FindOrder emits one surplus pair
	for i = 1 to count by 2 do if not keyboard.escPressed do
	(
		pb.value = i*100./count
		local nA = nodes[optimalOrder[i]]
		att nA nodes[optimalOrder[i+1]]
		verts += nA.numVerts -- global update counter used by the benchmark
	)
	nodes[1]
)
I really like your idea of using a pre-ordered node list! Why is it slow? Check the count of the pre-ordered list, check how long it takes to convert the C# array to a MXS array, check how much time the system needs to take a pointer to a node from the array, and don't make a local copy of the pointer inside the loop…
I can just guess. I'll be out of Max for a week.
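The first two of those checks are easy to isolate with timeStamp(); a minimal sketch, assuming the attachClass global from the post above and a scene of Editable Poly objects:

(
	local vertCounts = for o in geometry collect o.numVerts -- sample input
	local t0 = timeStamp()
	local order = attachClass.FindOrder vertCounts -- .NET call plus conversion back to a MXS array
	local t1 = timeStamp()
	format "FindOrder + conversion: %ms for % indices\n" (t1 - t0) order.count
)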
I envy you guys for having the free time to join these cool challenges; this last week I didn't even find time to read them carefully.
I think you've hit the base 'stone' that limits the benefit of .NET in Max. While the 1st challenge showed the power of .NET in Max, it is still a speculative proof. Don't get me wrong, I'm a .NET lover too, but there everything was 'great' because a single value was sent to .NET; here we're sending a huge amount of values (an array of 1000 elements), and that's slow.
Ah, even while I was typing, Denis already answered.
Not sure if it'd be of any help to anyone for this contest, but as a learning experiment I implemented a shell sort function to test against the MaxScript sort, and as you could guess, sort was way faster.
Of course, I could have just written it wrong, though it seems to sort things correctly, just slowly.
(
	rollout roSortingTests "Sorting Tests"
	(
		button btnSortTest "ShellSort vs Qsort"
		group ""
		(
			radioButtons rbSortMethod "Choose Test:" labels:#("Sort", "Shell Sort") columns:2 default:1 align:#center
		)
		spinner spnNumberArraySize "Array Size:" width:100 height:16 range:[1,50000,5000] type:#integer align:#center
		label lbSortResult ""

		-- in-place shell sort using the Knuth gap sequence (1, 4, 13, 40, …)
		fn shellSort whatArr =
		(
			local val = 1
			local temp = undefined
			local outer = undefined
			local inner = undefined
			while (val <= whatArr.count / 3) do
				val = val * 3 + 1
			while (val > 1) do
			(
				-- gapped insertion sort; MAXScript arrays are 1-based,
				-- so outer runs from val+1 through the last element
				for outer = val + 1 to whatArr.count do
				(
					temp = whatArr[outer]
					inner = outer
					while ((inner > val) and whatArr[inner-val] >= temp) do
					(
						whatArr[inner] = whatArr[inner - val]
						inner -= val
					)
					whatArr[inner] = temp
				)
				val = (val - 1) / 3
			)
		)
		on btnSortTest pressed do
		(
			gc()
			local myIntArray = #()
			for i = 1 to spnNumberArraySize.value do
				append myIntArray (random 1 spnNumberArraySize.value)
			local theCurrTime = timeStamp()
			if (rbSortMethod.state == 1) then
				sort myIntArray
			else
				shellSort myIntArray
			local endTime = timeStamp()
			if (rbSortMethod.state == 1) then
				lbSortResult.text = ("Sort took " + ((endTime - theCurrTime) as string) + "ms.")
			else
				lbSortResult.text = ("Shell sort took " + ((endTime - theCurrTime) as string) + "ms.")
			--for j = 1 to myIntArray.count by 100 do
			--	print j
			OK
		)
	)
	createDialog roSortingTests style:#(#style_toolwindow, #style_resizing, #style_sysmenu) fgcolor:[0,255,64] bitmap:undefined
)