[Closed] Find self intersections
It probably makes no difference in speed or memory use, but it looks cleaner:
meshFaces -= faces
It is a bit slower and uses twice the memory in my case.
for f in faces do deleteItem meshFaces f
setup grid:6 ms mem:624L
offset mesh:4 ms mem:584L
intersect mesh:651 ms mem:53040L
total:664 ms mem:54368L
meshFaces -= faces
setup grid:6 ms mem:624L
offset mesh:4 ms mem:584L
intersect mesh:700 ms mem:98096L
total:713 ms mem:99424L
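For reference, a minimal wrapper of the kind I assume was used to get those numbers (timestamp for milliseconds, a heapfree delta for heap bytes); the meshFaces/faces BitArrays below are just stand-ins, not the real data from the tool:
(
	-- stand-in BitArrays; in the real tool they come from the intersection test itself
	meshFaces = #{1..20000}
	faces = #{1..10000}
	st = timestamp()
	sh = heapfree
	meshFaces -= faces	-- or swap in: for f in faces do deleteItem meshFaces f
	format "intersect mesh:% ms mem:%\n" (timestamp() - st) (sh - heapfree)
)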
It looks like the same behavior as with Point3 values or integers, where Max creates a new instance of the value.
But in that case the original BitArray is never modified, so items are not actually being removed when -= is used inside the loop. Could that be it?
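One quick way to check that (just a sketch, assuming -= is plain shorthand for a = a - b, so the subtraction builds a brand-new BitArray and only rebinds the name, and assuming a plain assignment shares the BitArray value rather than copying it):
a = #{1..10}
b = a	-- assumed to reference the same BitArray value, not a copy
a -= #{5..10}	-- assumed equivalent to a = a - #{5..10}
format "a:% b:%\n" a.numberset b.numberset	-- 4 vs 10 would mean 'a' was rebound, not edited in place
deleteItem b 1	-- in-place edit for comparison
format "a:% b:%\n" a.numberset b.numberset	-- 'a' stays at 4: it now holds a different value
If that holds, a -= executed inside the loop never touches the BitArray the for-loop is still walking.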
I've never used your trick of deleting already-processed bits from the BitArray that is being processed:
like:
done = #{}
for f in faces do
(
	ff = <grow> f
	join done ff
	for b in ff do deleteItem faces b
)
I usually just check the current bit against what has already been done...
Check this sample (the first variant is how you do it, the second is how I do it):
(
	delete objects
	t = teapot segments:16
	mesh = snapshotasmesh t

	-- #1: delete already-processed verts from the BitArray being iterated
	t1 = timestamp()
	m1 = heapfree
	verts = #{1..mesh.numverts}
	processed = #{}
	for v in verts do
	(
		vv = meshop.getvertsusingface mesh (meshop.getfacesusingvert mesh v)
		append processed v
		for v in vv do deleteitem verts v
	)
	format "#1 processed:% time:% memory:%\n" processed.numberset (timestamp() - t1) (m1 - heapfree)

	-- #2: leave the BitArray alone and skip verts already marked as done
	t1 = timestamp()
	m1 = heapfree
	verts = #{1..mesh.numverts}
	done = #{}
	processed = #{}
	for v in verts where not done[v] do
	(
		vv = meshop.getvertsusingface mesh (meshop.getfacesusingvert mesh v)
		join done vv
		append processed v
	)
	format "#2 processed:% time:% memory:%\n" processed.numberset (timestamp() - t1) (m1 - heapfree)
)
For some reason I stopped using -= on BitArrays to reduce the iterations in a loop. Perhaps at the time I was creating the BitArrays to be subtracted on the fly and never cared to look into it in depth.
I can't really see what is wrong with this code, but the results are odd. Is it due to the creation of the BitArray to be removed?
(
	------------------------------------------------------------------
	-- quick behavior check: '-=' vs deleteItem while iterating
	array = #{1..10}
	for j in array do (print j; array -= #{5..10})
	print "-----"
	array = #{1..10}
	for j in array do (print j; for i = 5 to 10 do deleteitem array i)
	------------------------------------------------------------------
	last = 50000

	-- style 1: deleteItem
	seed 0
	st = timestamp()
	sh = heapfree
	array = #{1..last}
	deleted = #{}
	for j in array do
	(
		r = random 1 array.numberset
		append deleted r
		deleteitem array r
	)
	format "deleted:% time:% memory:%\n" array.numberset (timestamp()-st) (sh-heapfree)

	-- style 2: '-=' with a BitArray built on the fly
	seed 0
	st = timestamp()
	sh = heapfree
	array = #{1..last}
	deleted = #{}
	for j in array do
	(
		r = random 1 array.numberset
		append deleted r
		array -= #{r}
	)
	format "deleted:% time:% memory:%\n" array.numberset (timestamp()-st) (sh-heapfree)

	-- style 3: indexed assignment (just clears the bit)
	seed 0
	st = timestamp()
	sh = heapfree
	array = #{1..last}
	deleted = #{}
	for j in array do
	(
		r = random 1 array.numberset
		append deleted r
		array[r] = false
	)
	format "deleted:% time:% memory:%\n" array.numberset (timestamp()-st) (sh-heapfree)
)
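Part of the odd results may simply be that the three removal styles do not mean the same thing. A tiny sanity check of my understanding: deleteItem compacts the BitArray so the bits after the index shift down by one, indexed assignment only clears the bit and keeps the indices, and -= builds a new BitArray and rebinds the variable:
a = #{1, 3, 5}
deleteItem a 1	-- expect #{2, 4}: the bits after index 1 slide down
format "deleteItem   : %\n" a

b = #{1, 3, 5}
b[1] = false	-- expect #{3, 5}: bit 1 cleared, indices preserved
format "b[1] = false : %\n" b

c = #{1, 3, 5}
c -= #{1}	-- expect #{3, 5} too, but stored in a brand-new BitArray
format "c -= #{1}    : %\n" c
If that is right, the same random r ends up pointing at different bits depending on the style, which would account for at least part of the difference between the runs.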
I would have thought that checking against a second BitArray would be less efficient, but it seems it's not. In fact your approach runs a little bit faster and uses the same memory.
If we change the conditions, the difference gets bigger:
(
	num = 50000

	-- #1: delete already-processed bits from the BitArray being iterated
	t1 = timestamp()
	m1 = heapfree
	verts = #{1..num}
	processed = #{}
	for v in verts do
	(
		seed v
		vv = #{random v num, random v num}
		append processed v
		for v in vv do deleteitem verts v
	)
	format "#1 processed:% time:% memory:%\n" processed.numberset (timestamp() - t1) (m1 - heapfree)

	-- #2: leave the BitArray alone and skip bits already marked as done
	t1 = timestamp()
	m1 = heapfree
	verts = #{1..num}
	done = #{}
	processed = #{}
	for v in verts where not done[v] do
	(
		seed v
		vv = #{random v num, random v num}
		join done vv
		append processed v
	)
	format "#2 processed:% time:% memory:%\n" processed.numberset (timestamp() - t1) (m1 - heapfree)
)
Awe-so-me! I'm sure I'll find a place to use this.
I wish that performance boost could be achieved in all cases.