To make it even more real-life and obvious:
(
local obj = convertToPoly (Plane lengthSegs:1000 widthSegs:1000)
st = timestamp(); sh = heapfree
local selectedVerts = polyop.getVertSelection obj
for i = 1 to 100 do
for j in selectedVerts do ()
format "time:% ram:%\n" (timestamp()-st) (sh-heapfree)
)
Max 2016: time:696 ram:248L
Max 2017: time:8734 ram:264L
I wouldn’t say that’s bad code in any way, would you? I’d expect the bitarray contents to be changed inside the for loop (for example, subtracting verts you’ve already processed), but the important thing is that it doesn’t matter as long as the bitarray.count stays the same…
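A minimal sketch of that pattern (the per-vertex processing step is hypothetical), assuming, as stated above, that changing the set bits mid-loop is fine as long as the count stays the same:

```maxscript
-- Sketch only: clear bits as you process them; the bitarray's count (its
-- size) never changes, only the number of set bits does.
(
    local verts = #{1..100}
    for v in verts do
    (
        -- ...process vertex v here (hypothetical work)...
        verts[v] = false -- clear the processed bit; verts.count stays 100
    )
    format "count:% numberSet:%\n" verts.count verts.numberSet
)
```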
I always had a problem with bitarrays, but with their leaking. So a couple of years ago I wrote my own MXS extension to do all bitarray operations in-place, and it solved my problem.
But the performance issue is new…
No, it is exactly the same thing.
I just stored the bitarrays in an array to make it “real”. It is not the array that causes the slowdown, but iterating over the bitarray, even if it has only one bit set.
(
st = timestamp(); sh = heapfree
element = #{1}
for j = 1 to 100000 do
(
for i in element do ()
)
format "time:% ram:% count:%\n" (timestamp()-st) (sh-heapfree) element.count
st = timestamp(); sh = heapfree
element = #{10}
for j = 1 to 100000 do
(
for i in element do ()
)
format "time:% ram:% count:%\n" (timestamp()-st) (sh-heapfree) element.count
st = timestamp(); sh = heapfree
element = #{100}
for j = 1 to 100000 do
(
for i in element do ()
)
format "time:% ram:% count:%\n" (timestamp()-st) (sh-heapfree) element.count
st = timestamp(); sh = heapfree
element = #{1000}
for j = 1 to 100000 do
(
for i in element do ()
)
format "time:% ram:% count:%\n" (timestamp()-st) (sh-heapfree) element.count
st = timestamp(); sh = heapfree
element = #{10000}
for j = 1 to 100000 do
(
for i in element do ()
)
format "time:% ram:% count:%\n" (timestamp()-st) (sh-heapfree) element.count
)
MAX 2016
time:55 ram:216L count:1
time:59 ram:216L count:10
time:102 ram:216L count:100
time:528 ram:216L count:1000
time:4864 ram:216L count:10000
MAX 2017
time:90 ram:196L count:1 >> 1.63x slower
time:197 ram:196L count:10 >> 3.34x slower
time:1231 ram:196L count:100 >> 12.1x slower
time:11607 ram:196L count:1000 >> 22.0x slower
time:115296 ram:204L count:10000 >> 23.7x slower
Here it is in the shortest possible code.
Max 2017 is almost 30 times slower than any previous version.
(
st = timestamp()
for j = 1 to 10 do for i in #{1000000} do()
format "time:%\n" (timestamp()-st)
)
MAX 2016
time:47
MAX 2017
time:1293
Could you compare these two:
(
st = timestamp()
for j = 1 to 10 do for i in #{1000000} do()
format "time:%\n" (timestamp()-st)
)
(
st = timestamp()
local bits = #{1000000}
for j = 1 to 10 do for i in bits do()
format "time:%\n" (timestamp()-st)
)
please?
PS: please add a memory test too.
Here you go:
Max 2016
time:69 ms, ram:760L
time:69 ms, ram:184L
Max 2017
time:876 ms, ram:808L
time:867 ms, ram:196L
To answer my own question whether this is a feature or a bug: this IS a bug to me, and I think it will be fixed.
If I had to guess, I would think that when iterating over a bitarray, it may internally go through all the bits up to the last one set, instead of iterating only over the ones that are set.
Again, I have no proof of that, so we’ll have to wait for the developer’s word on this.
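One way to probe that guess without SDK access is to give each bitarray exactly one set bit and vary only its index (a sketch, not a definitive test):

```maxscript
-- Each bitarray below has numberSet == 1; only the index of that single
-- set bit differs. If iteration visited only set bits, the three timings
-- would be about equal; if it walks every bit up to the last set one,
-- time should grow with the index.
(
    for n in #(100, 1000, 10000) do
    (
        local element = #{}
        element[n] = true -- one bit set, at index n
        local st = timestamp()
        for j = 1 to 100000 do for i in element do ()
        format "index:% numberSet:% time:%\n" n element.numberSet (timestamp()-st)
    )
)
```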
It always was like that: in the SDK we iterate over all bits from 0 to count. There was a bit-iterator, but it’s broken in all 64-bit versions.
I fixed it, but it’s not faster, so iterating over all bits internally is not the problem.
one more time please:
(
st = timestamp()
for j = 1 to 10 do for i in #{1000000} do()
format "time:%\n" (timestamp()-st)
)
(
st = timestamp()
local bits = #{1000000}
for j = 1 to 10 do for i in bits do()
format "time:%\n" (timestamp()-st)
)
(
st = timestamp()
for j = 1 to 10 do
(
local bits = #{1000000}
for i in bits do()
)
format "time:%\n" (timestamp()-st)
)
Max 2016
time:69 ms, heap:760L
time:70 ms, heap:184L
time:72 ms, heap:760L
Max 2017
time:877 ms, heap:808L
time:873 ms, heap:196L
time:869 ms, heap:808L
Glancing over those numbers you guys come up with, I think it’s inevitable that something has to be done. But the biggest slowdown here is with bitarrays – or am I wrong? The other cases, in the range of 2x, would somehow be acceptable IMHO – at least if it’s not some general code f*ckup issue in the changes they implemented with Max 2017.
Yes, the biggest problem is iterating over bitarrays. The higher the index of the bit that is set, the greater the performance drop.
While plain loops perform 1.5X-2X slower in Max 2017, which is a big difference, it is insignificant compared with looping over a bitarray, which, as far as we could measure, can be in the range of 25X slower in Max 2017.
Someone should report it as a bug… Although the dev said it’s not a bug in the strict sense, it is still a performance regression, and that needs to be cleared up.
Please, guys, report that BitArray slowdown as a bug, with specific numbers and your example code…
Swordslayer already posted on The Area → 3ds Max Programming forum about the loops issue and got a direct reply from Kevin Vandecar.
http://forums.autodesk.com/t5/3ds-max-programming/slow-for-loop-in-3dsmax-2017/td-p/6400380
The later posts about the bitarray issue are still unanswered, though.
So I hope this will not slip under the radar, but it has to be reported via the public defect form for 3ds Max:
http://download.autodesk.com/us/support/report_a_bug.html?SelProduct=3dsMax
Why do you think it’s a bug? This is exactly the result of the ‘safety improvements’ in Max 2017. A bug is an unexpected mistake that can be fixed; this one is a systemic mistake in the code implementation. The feature has to be reviewed and probably re-implemented to make it work well.
Well, that’s what I’d call nitpicking – what does it matter whether it’s formally a bug or a flawed implementation?
You can discuss the thing here up and down all day long, but do you think that will lead to the issue getting fixed?
I just want to understand what we want to report as a bug: the performance slowdown? Or that bitarrays are still not collectable in local scope – which is a bug and can be fixed without fixing the slowdown issue?
BTW, I’m absolutely sure there is the same slowdown issue related to the use of arrays.
I think nobody disputes that; it’s just that, since arrays have a fixed count of items instead of true/false values, the slowdown in the case of arrays is by a factor of about 1.5, not more. Iterating over a several-million-item array ten or more times already takes a few seconds in Max 2017, unlike iterating over an empty several-million-count bitarray, so few people will complain about that or give it as an example of unacceptable behavior – but the performance gap with bitarrays is easy to demonstrate and is huge.
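To put a rough number on that comparison, something like this sketch could be run in both versions – the array loop executes n body iterations, while the bitarray loop executes only one but still spans n bits:

```maxscript
-- Sketch: compare iterating an n-item array against a bitarray whose only
-- set bit is at index n. The array loop runs the (empty) body n times; the
-- bitarray loop runs it once, yet the bitarray still spans n bits.
(
    local n = 1000000
    local arr = for i = 1 to n collect i
    local bits = #{n} -- single bit set, at index n
    local st = timestamp()
    for i in arr do ()
    format "array:    % ms\n" (timestamp()-st)
    st = timestamp()
    for i in bits do ()
    format "bitarray: % ms\n" (timestamp()-st)
)
```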