
[Closed] MXS 2017 'For Loop' performance

Yes, please report it as a bug.
The more reports, the better.

Please check this:

(
	st = timestamp()
	sh = heapfree
	local almostEmptyArray = #()
	almostEmptyArray.count = 1000 

	for i = 1 to 10000 do
		for j in almostEmptyArray do ()

	format "time:% ram:%
" (timestamp()-st) (sh-heapfree)
)

(@swordslayer)

Max 2016: time:1570 ram:192L
Max 2017: time:2638 ram:204L

And it's exactly the same result when the array count is 1000000 and the loop count is 10.
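For reference, the swapped variant would presumably look like this (a sketch following the same pattern as the snippet above; the variable name bigEmptyArray is mine):

(
	st = timestamp()
	sh = heapfree
	local bigEmptyArray = #()
	bigEmptyArray.count = 1000000	-- counts exchanged: big array, few outer iterations
	for i = 1 to 10 do
		for j in bigEmptyArray do ()
	format "time:% ram:%\n" (timestamp()-st) (sh-heapfree)
)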

We put three different things in one bag…

There are three issues we found:

#1 – slowdown of a simple loop

(
	t = timestamp()
	h = heapfree
	for k=1 to 1000000 do ()
	format "time:% heap:%\n" (timestamp() - t) (h - heapfree)
)

(which is not a bug IMHO; it's a badly implemented new feature). It might be fixed.

#2 – still not freeing bitarrays (arrays) in local scope:

(
	t = timestamp()
	h = heapfree
	for k=1 to 100000 do
	(
		local a = #{1..10000}	-- bitarray created in local scope every iteration
	)
	format "time:% heap:%\n" (timestamp() - t) (h - heapfree)
)

(The matrix3 and point3/point4 cases are fixed. It's NOT a bug; it's just that a solution for how to do it hasn't been found.)
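For comparison, a sketch of the analogous test with a matrix3 value in local scope (my own variant, following the same pattern; in builds where that fix landed, the heap difference should stay near zero):

(
	t = timestamp()
	h = heapfree
	for k=1 to 100000 do
	(
		local m = matrix3 1	-- identity matrix3 created and discarded each iteration
	)
	format "time:% heap:%\n" (timestamp() - t) (h - heapfree)
)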

#3 – dramatic slowdown of the bitarray iterator:

(
	t = timestamp()
	h = heapfree
	a = #{1000000}
	for k in a do ()
	format "time:% heap:%\n" (timestamp() - t) (h - heapfree)
)

This is the BUG, and it must be fixed.

http://forums.autodesk.com/t5/3ds-max-programming/slow-for-loop-in-3dsmax-2017/m-p/6409291#M14407

I never would have thought of such an analytical, reflective and profound response.
I am impressed!

Indeed, Kevin Vandecar is some kind of silent Max SDK hero at Autodesk in my eyes.

Stumbled over his blog a couple of years ago
http://getcoreinterface.typepad.com/

not very active, but at least it’s out there …

(@denist)

I would try to make it perfect…

The complete mxs callstack mechanism has not been very well implemented since Max 2017. In the recent version we implemented a new mechanism which significantly changed the performance. We found that in some of our internal tests we get much better performance using the new mechanism. However, to keep legacy consistency, we added an optional context

with pleaseDontDoIt on

But we found that using this context doesn't make the performance better. So we recommend not using this context, or, as a possible solution, breaking your code down into smaller try/catch blocks, or wrapping the whole code block in only one try/catch expression. One or the other has to work better.

(@reform)

Did you notice my other post? I can’t use their recommended method to improve try()catch() performance as it isn’t backwards compatible with <2017, even if I do a version check before executing the code.

if ((maxVersion())[1] / 1000) > 18 then (with MXSCallstackCaptureEnabled off try(format "test")catch())
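One possible workaround (my assumption, not something suggested in the thread): the version check doesn't help because the whole expression, including the 2017-only context name, is parsed before anything runs, so deferring compilation with execute keeps pre-2017 builds from ever seeing it:

-- sketch of a possible workaround: compile the 2017-only context at runtime via execute,
-- so versions below 2017 never parse MXSCallstackCaptureEnabled
if ((maxVersion())[1] / 1000) > 18 then
	execute "with MXSCallstackCaptureEnabled off try(format \"test\")catch()"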

(@polytools3d)

Hello Patrick,

That is a different issue, not directly related to the 'For Loop' performance.

I would open a new thread to discuss it further, as this thread already has a lot of information, especially for those who need to read it from the beginning.

Also, we want to make clear the issue of 'dropping of performance' on BitArray enumeration. We found that in some situations the performance might be lower than in all previous versions, but it is still much better than any crashing of Max. So dropping the performance by up to 10 times can't take a significant amount away from overall Max stability.

(@polytools3d)

Up to 25 times, to be correct (as far as I could measure).
Anyway, a 10-25 times drop in performance is nothing compared with a safe environment.
After all, what is the difference between playing a game at 120 or 5 FPS? I would rather play all games at 5 FPS if it is assured they will not crash.

Could anyone test it against 2017?

(
	a = #{}
	seed 0
	for k=1 to 100000 do append a (random 1 10000000)
	t = timestamp()
	h = heapfree
	for x in (a as array) do ()
	format "ARRAY >> count:% numberset:% time:% heap:%
" a.count a.numberset (timestamp() - t) (h - heapfree)
)
(
	a = #{}
	seed 0
	for k=1 to 100000 do append a (random 1 10000000)
	t = timestamp()
	h = heapfree
	for x in a do ()
	format "BITS  >> count:% numberset:% time:% heap:%
" a.count a.numberset (timestamp() - t) (h - heapfree)
)

This is what I have in 2014 so far:

ARRAY >> count:9999896 numberset:99480 time:67 heap:5575160L
BITS  >> count:9999896 numberset:99480 time:129 heap:120L

As you can see, the (conversion to array + array iteration) is still two times faster than the bits iteration.

Using this finding I've written an mxs bitIterator class. It combines construction and map at the same time…

so the

for x in bititerator(bits) do (....)

is more than 2 times faster in many cases than

for x in bits do (....)

where bits is a bitArray

The bitIterator doesn't take any heap memory, and it's NEVER slower than bitarray iteration.
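For illustration only, a minimal hypothetical sketch of the underlying idea (the actual bitIterator class is not posted here, and unlike it this sketch does allocate a temporary array for the conversion):

-- hypothetical helper: map a function over the set bits of a bitarray,
-- going through a one-time array conversion instead of the slow 2017 bitarray iterator
fn mapBits bits func =
(
	for x in (bits as array) do func x
)

-- usage sketch
fn doNothing x = ()	-- stand-in for the per-element work
mapBits #{1..100} doNothing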

MAX 2016
ARRAY >> count:9999896 numberset:99480 time:62 heap:5726024L
BITS >> count:9999896 numberset:99480 time:72 heap:120L

MAX 2017
ARRAY >> count:9999896 numberset:99480 time:81 heap:6101380L
BITS >> count:9999896 numberset:99480 time:1209 heap:128L

(@denist)

Ha-ha-ha! But anyway…

81 vs 1209 can’t be a significant case, can it?

Do you really think I was serious when answering the question? It was just a joke.
