[Closed] Leaking memory with large arrays
Basically, I’m trying to do a Delaunay triangulation (based on Paul Bourke’s algorithm) in MAX 2009 64-bit. I start off by reading all the vertices into an array, which is then sorted along the x-axis. This takes a little while (800k+ points) and uses a fair bit of memory, but the real problems start when I’m building new arrays of faces and edges. Pseudo-code:
points=all vertices
triangles=initial triangle
done_triangles=#()
edges=#()
for i in 1 to points.count do
(
for j in triangles.count to 1 by -1 do
(
if points[i] is within circumcircle of triangles[j] then
(
append edges all three edges of triangles[j]
deleteItem triangles j
)
else if circumcircle of triangles[j] is entirely to the left of points[i] then
(
append done_triangles triangles[j]
deleteItem triangles j
)
)
iterate through edges and remove all edges that appear twice
for (remaining edges) do append triangles for each edge + points[i]
)
for i in (any remaining triangles) do append done_triangles triangles[i]
new_obj=mesh vertices:points faces:triangles
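For clarity, the pseudo-code above can be restated as a self-contained Python sketch of the same Bourke/Bowyer–Watson-style sweep. All names are my own; the supertriangle sizing and the strict in-circle test are simplified, and cocircular input can flip diagonals, so treat this as an illustration of the algorithm rather than the actual MAXScript implementation:

```python
import math
from collections import Counter

def circumcircle(a, b, c):
    """Circumcenter (ux, uy) and radius of triangle abc; None if collinear."""
    d = 2.0 * (a[0]*(b[1]-c[1]) + b[0]*(c[1]-a[1]) + c[0]*(a[1]-b[1]))
    if abs(d) < 1e-12:
        return None
    aa, bb, cc = (a[0]**2 + a[1]**2), (b[0]**2 + b[1]**2), (c[0]**2 + c[1]**2)
    ux = (aa*(b[1]-c[1]) + bb*(c[1]-a[1]) + cc*(a[1]-b[1])) / d
    uy = (aa*(c[0]-b[0]) + bb*(a[0]-c[0]) + cc*(b[0]-a[0])) / d
    return ux, uy, math.hypot(a[0] - ux, a[1] - uy)

def delaunay(points):
    """Return (triangles, pts): index triples into the x-sorted point list."""
    pts = sorted(points)                        # sweep left-to-right along x
    n = len(pts)
    xs, ys = [p[0] for p in pts], [p[1] for p in pts]
    cx, cy = (min(xs) + max(xs)) / 2.0, (min(ys) + max(ys)) / 2.0
    m = 20.0 * (max(max(xs) - min(xs), max(ys) - min(ys)) + 1.0)
    pts += [(cx + 2*m, cy), (cx - m, cy + m), (cx - m, cy - m)]  # supertriangle
    sup = {n, n + 1, n + 2}
    open_tris = [((n, n+1, n+2), circumcircle(pts[n], pts[n+1], pts[n+2]))]
    done = []
    for i in range(n):
        px, py = pts[i]
        edges, keep = [], []
        for tri, (ux, uy, r) in open_tris:
            if ux + r < px:
                done.append(tri)                # circle entirely left: final
            elif (px - ux)**2 + (py - uy)**2 < r*r:
                # point inside circumcircle: drop the triangle, keep its edges
                edges += [(tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])]
            else:
                keep.append((tri, (ux, uy, r)))
        open_tris = keep
        # edges appearing twice (in either direction) are interior: drop them
        count = Counter(frozenset(e) for e in edges)
        for a, b in edges:
            if count[frozenset((a, b))] == 1:
                cc = circumcircle(pts[a], pts[b], pts[i])
                if cc is not None:
                    open_tris.append(((a, b, i), cc))
    done += [tri for tri, _ in open_tris]
    # discard anything that still touches the supertriangle
    return [t for t in done if not (set(t) & sup)], pts
```

Calling `delaunay` on the four corners of an irregular quad returns the expected two triangles; `open_tris` and `done` play the roles of the `triangles` and `done_triangles` arrays in the pseudo-code.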
There is also a function to calculate circumcircle center and radius. This code leaks memory at a phenomenal rate; I have 12 gigs of RAM, and it’s all gone by the time I reach around 20k points. I have turned undo off, and tried using gc(), but the problem persists.
As far as causes for memory leaks go, I can only see the two main arrays (triangles and edges) and their append and deleteItem methods.
I’ve updated MAX with the latest service packs and hotfixes, but interestingly, when I try the code in MAX 2010, there is no memory leak… but it’s dead slow!?
Does anyone have any ideas?
Hi, as far as I know, append() and deleteItem() cause the updated array to be completely rewritten in memory, hence the large memory consumption and growing calculation times. Calling gc() would help, but it is quite a slow task in itself.
When I have to deal with large arrays, I usually create a control BitArray of the same size to check against before fetching the actual data, which is much faster, but I don’t know if that applies to your case. I can’t give advice without looking at the code, but I guess you’re not comfortable publishing it.
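The control-array idea can be sketched in Python: rather than removing items one by one, flag them and compact once (illustrative names, not MAXScript API):

```python
# Instead of deleting array items one at a time (each deleteItem-style
# removal shifts the tail of the array), mark dead items in a parallel
# flag list (the "control BitArray") and build the survivors in one pass.
def prune_with_flags(items, is_dead):
    alive = [not is_dead(x) for x in items]   # O(1) per mark
    return [x for x, keep in zip(items, alive) if keep]

data = list(range(10))
survivors = prune_with_flags(data, lambda x: x % 3 == 0)
```

In MAXScript the same pattern would use a BitArray sized to the array and one rebuild loop instead of repeated deleteItem calls.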
- Enrico
Well, they certainly do something that uses an awful lot of memory ;-). gc() does nothing to reclaim the memory, regardless of speed issues, so it feels like a bug (especially considering MAX 2010 handles it, albeit very slowly).
…well, only because I’m such a messy coder (and resort to obscure variable names in my own tongue). I might try to clean it up and post it, if necessary.
I never meant to be mean, and I would totally respect it if you wanted to keep your work private. I just think your variable names would be hard to read, at least for me :curious: It’s difficult to guess without reading the code. For example, there are some Editable Poly methods that can eat up memory when used straight as they are, but they can be much improved with aliases. You can take a look at this test. As a general rule, try to cache every piece of data that is constant during processing: if you have to get the position of the same vertex more than once, save it in an array the first time and query that array afterwards.
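The caching rule can be sketched in Python; `get_vert` here is a dummy stand-in for an expensive scene query such as polyop.getVert (names are my own):

```python
# Cache data that is constant during processing: hit the expensive source
# once per index, then serve all repeats from the cache.
calls = 0

def get_vert(i):                 # pretend this is the slow scene query
    global calls
    calls += 1
    return (float(i), float(i) * 2.0, 0.0)   # dummy position

cache = {}

def cached_vert(i):
    if i not in cache:
        cache[i] = get_vert(i)   # first lookup pays the full cost
    return cache[i]              # repeats are a cheap dictionary read

positions = [cached_vert(7) for _ in range(5)]   # hits the source only once
```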
- Enrico
Also, if you know the maximum number of items that will ever be in a given array, pre-defining the length of the array can help speed things up, and with 800k items I’m sure the performance increase would be significant.
example of pre-defining array size:
-- predefine an array to 1000 items
myArray = #()
myArray[1000] = undefined
myArray.count -- should be 1000
You’ll likely have to re-work your code a bit to accommodate this, but it shouldn’t be too complex.
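The same pre-sizing idea in Python terms (illustrative only; Python lists resize differently from MAXScript arrays, but the pattern of writing into pre-allocated slots carries over):

```python
# Growing an array one append at a time forces periodic reallocation.
# Pre-sizing allocates once up front and then writes by index,
# like the MAXScript trick of assigning myArray[1000] = undefined.
N = 1000
arr = [None] * N          # one allocation of N slots
for i in range(N):
    arr[i] = i * i        # write into the pre-sized slot, no growth
```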
The two “dynamic” arrays don’t have a final size as such; they grow and shrink as the algorithm moves along. There is no logical connection between a given new triangle and a particular index in the array, so I don’t see how I could make a pre-sized array work for me, short of manually coding a linked list, which is kind of what the MAXScript “array” datatype is supposed to do for you in the first place. Or am I wrong about this?
Anyways, I tried to clean up the code, so here it is:
-- Credit to Paul Bourke (paul.bourke@uwa.edu.au) for the original Fortran 77 Program.
-- Check out: http://local.wasp.uwa.edu.au/~pbourke/
-- You can use this code however you like, providing the above credits remain intact.
-- Maxscript adaptation by Steinar Vatne, Rambøll Mapping, 2009
fn CIC t1 t2 t3=
(
epsilon=0.000001d0
test1=abs (t1.y-t2.y)
test2=abs (t2.y-t3.y)
xc=0d0
yc=0d0
radius=0d0
if (test1<epsilon) and (test2<epsilon) then return false
if (test1<epsilon) then
(
m2=-(t3.x-t2.x) / (t3.y-t2.y)
mx2=(t2.x+t3.x)/2.
my2=(t2.y+t3.y)/2.
xc=(t2.x+t1.x)/2.
yc=m2*(xc-mx2)+my2
)
else if (test2<epsilon) then
(
m1=-(t2.x-t1.x)/(t2.y-t1.y)
mx1=(t1.x+t2.x)/2.
my1=(t1.y+t2.y)/2.
xc=(t3.x+t2.x)/2.
yc=m1*(xc-mx1)+my1
)
else
(
m1=-(t2.x-t1.x)/(t2.y-t1.y)
m2=-(t3.x-t2.x)/(t3.y-t2.y)
mx1=(t1.x+t2.x)/2.
mx2=(t2.x+t3.x)/2.
my1=(t1.y+t2.y)/2.
my2=(t2.y+t3.y)/2.
xc=(m1*mx1-m2*mx2+my2-my1)/(m1-m2)
if (test1>test2) then
(
yc=m1*(xc-mx1)+my1
)
else
(
yc=m2*(xc-mx2)+my2
)
)
radius=length ([t2.x,t2.y]-[xc,yc])
return #(xc,yc,radius)
)
fn compareP p1 p2=
(
case of
(
(p1.x>p2.x):1
(p1.x==p2.x):0
default:-1
)
)
undo off
(
ePoly=$terreng
points=#()
p_num=ePoly.verts.count
if p_num<3 then exit
ep_b=ePoly.baseobject
po_g=polyop.getVert
for i in 1 to p_num do
(
append points (po_g ep_b i)
)
qsort points compareP
size=ePoly.max-ePoly.min
avg=(ePoly.max+ePoly.min)/2
maxl=((size.x+size.y)/2.)*2
append points [avg.x+maxl,avg.y,0.]
append points [avg.x-(maxl/2),avg.y+maxl,0.]
append points [avg.x-(maxl/2),avg.y-maxl,0.]
tri_buffer=#()
tri_final=#()
init_face=[points.count-2,points.count-1,points.count]
append tri_buffer #(init_face, (CIC points[init_face.x] points[init_face.y] points[init_face.z]))
FB_count=1 --Face buffer count
s_time=timestamp()
for i in 1 to p_num do
(
this_p=points[i]
format "Vertex % Count %\n" i FB_count
edges=#()
for j in FB_count to 1 by -1 do
(
face=tri_buffer[j][1]
thisCIC=tri_buffer[j][2]
if thisCIC!=false then
(
if (length ([this_p.x,this_p.y]-[thisCIC[1],thisCIC[2]])<thisCIC[3]) then
(
append edges #(face.x,face.y)
append edges #(face.y,face.z)
append edges #(face.z,face.x)
deleteItem tri_buffer j
FB_count-=1
)
else if ((thisCIC[1]+thisCIC[3])<this_p.x) then
(
append tri_final face
deleteItem tri_buffer j
FB_count-=1
)
)
else
(
format "false: % %\n" i points[i]
--remove point perhaps?
)
)
edge1=edges.count
while edge1 > 1 do
(
edge2=edge1-1
sorted_e=#(amin edges[edge1],amax edges[edge1])
while edge2 > 0 do
(
if (sorted_e[1]==(amin edges[edge2]) and sorted_e[2]==(amax edges[edge2])) then
(
deleteItem edges edge1
deleteItem edges edge2
edge1-=1
edge2=1
)
edge2-=1
)
edge1-=1
)
for j in edges do
(
append tri_buffer #([j[1],j[2],i], (CIC points[j[1]] points[j[2]] this_p))
)
FB_count+=edges.count
)
for i in FB_count to 1 by -1 do
(
append tri_final tri_buffer[i][1]
)
new_obj=mesh vertices:points faces:tri_final
points=undefined
tri_buffer=undefined
tri_final=undefined
gc()
format "Secs: %\n" ((timestamp()-s_time)/1000.)
)
Input object is hard-coded and must be an editable poly. It doesn’t have to have faces, but it should have a good number of vertices if you want to see the memory leak in action. A plane with 100×100 segments converted to editable poly should do nicely.
I’ve tried to optimize a few things; I store the circumcircle calculation for each new triangle directly in the array, so it’s only calculated once for each triangle, and I’ve tried to make the edge culling loop as efficient as possible. There’s probably lots of room for improvement though.
I know the “format” command is very slow, but I left it in there for now.
There are several comments regarding your code…
1. You have a memory leak because of using large arrays. You are right about that.
2. One two-dimensional array leaks more than two one-dimensional arrays.
3. An array of arrays (or of structures) leaks more than an array of Point4, Point3 or Point2 values. If you can replace an array with a pointN, do it!
4. append and deleteItem don’t leak by themselves; only the fact that an array grows leaks. So if you have a loop with serious array growing and shrinking, do gc light:on every N iterations. It slows things down, but that’s the right price for keeping memory from leaking.
5. undo off doesn’t help in your case.
6. getVert for an Editable_Mesh works faster and causes less memory leaking than polyop.getVert.
7. There is not too much to optimize in your code, but here is my version…
fn CIC t1 t2 t3=
(
epsilon = 0.000001d0
test1 = abs (t1.y-t2.y)
test2 = abs (t2.y-t3.y)
if (test1 < epsilon) and (test2 < epsilon) then undefined
else
(
xc = 0d0
yc = 0d0
radius = 0d0
if (test1 < epsilon) then
(
m2 = -(t3.x-t2.x)/(t3.y-t2.y)
mx2 = (t2.x+t3.x)/2
my2 = (t2.y+t3.y)/2
xc = (t2.x+t1.x)/2
yc = m2*(xc-mx2) + my2
)
else
if (test2 < epsilon) then
(
m1 = -(t2.x-t1.x)/(t2.y-t1.y)
mx1 = (t1.x+t2.x)/2
my1 = (t1.y+t2.y)/2
xc = (t3.x+t2.x)/2
yc = m1*(xc-mx1) + my1
)
else
(
m1 = -(t2.x-t1.x)/(t2.y-t1.y)
m2 = -(t3.x-t2.x)/(t3.y-t2.y)
mx1 = (t1.x+t2.x)/2
mx2 = (t2.x+t3.x)/2
my1 = (t1.y+t2.y)/2
my2 = (t2.y+t3.y)/2
xc = (m1*mx1 - m2*mx2 + my2 - my1)/(m1-m2)
yc = if (test1 > test2) then (m1*(xc-mx1) + my1) else (m2*(xc-mx2) + my2)
)
radius = distance [t2.x,t2.y] [xc,yc]
[xc,yc,radius,radius+xc] -- point4 and precalculated [1] + [3]
)
)
fn doIT node:selection[1] = if iskindof node GeometryClass and (pmesh = copy node.mesh).numverts >= 3 do
(
fn compareP p1 p2 = if (d = p1.x - p2.x) > 0 then 1 else if d < 0 then -1 else 0
s_time = timestamp()
h_free = heapfree
verts = pmesh.verts as bitarray
points = for v in verts collect ((getVert pmesh v)*node.objecttransform)
qsort points compareP
size = node.max - node.min
avg = (node.max + node.min)/2
maxl = size.x + size.y
append points [avg.x+maxl,avg.y,0.]
append points [avg.x-(maxl/2),avg.y+maxl,0]
append points [avg.x-(maxl/2),avg.y-maxl,0]
tri_final = #()
init_face = [points.count-2,points.count-1,points.count]
-- two arrays of point3
tri_buffer = #(CIC points[init_face.x] points[init_face.y] points[init_face.z])
ini_buffer = #(init_face)
count = ini_buffer.count --Face buffer count
for i=1 to points.count do
(
this_p = points[i]
edges = #()
for j=count to 1 by -1 do
(
face = ini_buffer[j]
thisCIC = tri_buffer[j]
if thisCIC != undefined do
(
if (distance [this_p.x,this_p.y] [thisCIC[1],thisCIC[2]]) < thisCIC[3] then
(
append edges [face.x,face.y] -- point2
append edges [face.y,face.z] -- point2
append edges [face.z,face.x] -- point2
deleteItem tri_buffer j
deleteItem ini_buffer j
count -= 1
)
else if thisCIC[4] < this_p.x then
(
append tri_final face
deleteItem tri_buffer j
deleteItem ini_buffer j
count -= 1
)
)
)
for e in edges where e.x > e.y do swap e.x e.y -- some optimization for replacing amin/amax
e1 = edges.count
while e1 > 1 do
(
e2 = e1-1
while e2 > 0 do if edges[e1] == edges[e2] then
(
deleteItem edges e1
deleteItem edges e2
e1 -= 1
e2 = 0
)
else e2 -= 1
e1 -= 1
)
for e in edges do
(
append ini_buffer [e[1],e[2],i]
append tri_buffer (CIC points[e[1]] points[e[2]] this_p)
)
count += edges.count
if mod i 1000 == 0 do gc light:on -- gc for every 1000th iteration
)
for i=count to 1 by -1 do append tri_final ini_buffer[i]
format "---- Secs: % Leak: %\n" ((timestamp()-s_time)/1000.) (h_free - heapfree)
new_obj = mesh vertices:points faces:tri_final wirecolor:yellow
gc light:on
new_obj
)
doIT()
it’s a little faster and leaks less…
Why, thank you good sir! Some excellent ideas there, and I really appreciate you taking the time to go through it so thoroughly.
I’m currently at home, so I can only test on MAX 2008, where the code seems to have no (or hardly any) leak at all. I suspect that will change with 2009, but I can see that it should be less than before. Hopefully it will be enough that I can accomplish the task at hand.
I can hardly blame you for breaking a couple of things, considering how the code was thrown at you without context, but for correctness’ sake:
- A minor point: the main loop should not include the 3 vertices of the initializing triangle, which is why I preserved the count in a variable. Of course, “1 to (points.count-3)” will work just as well.
- More importantly: the order of the edge points must be preserved. This is partly the reason for my somewhat cumbersome solution for comparing/removing edges. It affects triangle normals and carries over to the circumcircle calculation, which needs counter-clockwise triangles to function correctly. It should still be possible to use a Point2 instead of an array, though, perhaps stored in an intermediate variable.
This last error obviously breaks the correctness of the algorithm, but it also leads to too few triangles being discarded, so the corrected algorithm should run considerably faster than this version. Still, the algorithm is quite heavy and will take considerable time with the kind of data I’m looking at. Someone should make a proper triangulation tool with the SDK, or better yet, functionality like this should be incorporated into existing MAX objects, like Editable Poly/Mesh or ProOptimizer.
Thank you once again, great stuff!
To keep the edges’ vert order, make a change in the code:
...
-- for e in edges where e.x > e.y do swap e.x e.y
e1 = edges.count
while e1 > 1 do
(
e2 = e1-1
e = edges[e1]
-- while e2 > 0 do if edges[e1] == edges[e2] then
while e2 > 0 do if e == edges[e2] or [e.y,e.x] == edges[e2] then
...
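The same orientation-preserving cull can be expressed with a hash of the unordered vertex pair, counting duplicates in one pass (a Python sketch with my own names, not the MAXScript above):

```python
from collections import Counter

def boundary_edges(edges):
    """Drop every edge that occurs twice (in either direction); survivors
    keep their original orientation, so triangle winding is preserved."""
    counts = Counter(frozenset(e) for e in edges)
    return [e for e in edges if counts[frozenset(e)] == 1]

# (2,3) and (3,2) are the same undirected edge, so both are removed;
# the remaining edges come back in their original order and orientation.
edges = [(1, 2), (2, 3), (3, 1), (3, 2), (2, 4), (4, 3)]
result = boundary_edges(edges)
```

This replaces the quadratic pairwise comparison with a single counting pass plus a single filter pass.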
Yes, thanks again. My testing suggests there is still something fishy going on with the triangle culling (the removal of triangles when center+radius < current.x), but I can’t nail it down at the moment. In the mentioned case of a 100×100 plane, the number of triangles still under consideration shouldn’t really rise above approximately 200 to 400, but it currently seems to increase steadily to several thousand.
On another side note, the reason I went with editable poly to start with was that I was hoping to develop the algorithm to handle constraints. More precisely, I was hoping to use the border/edge selection available in editable poly to constrain the algorithm in order to fill in gaps between meshes. The cap function (or converting double, closed splines to editable poly) results in hopeless triangulation. The Terrain object does better, but it can’t be constrained, so it requires a lot of extra work for this purpose. However, for now, I’d be happy just to be able to properly triangulate a set of points.
I have nothing to say about the algorithm; I didn’t look deeply into it. But my resulting mesh is exactly the same as the one from your code. The number of verts and faces is the same, and the vert order for faces is the same. Only one thing is different: the vert positions. If you are using the BaseObject’s vert positions, you have to multiply them by the object’s objecttransform to get the right values.
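For reference, MAXScript’s point3 * matrix3 treats the point as a row vector: the result is x*row1 + y*row2 + z*row3 + the translation row. A minimal Python sketch of that convention (illustrative names; not a MAXScript API):

```python
# Apply a 3x3 row-major matrix plus a translation row to a 3D point,
# following the row-vector convention used by point3 * matrix3.
def transform_point(p, rows, translation):
    return tuple(
        p[0]*rows[0][i] + p[1]*rows[1][i] + p[2]*rows[2][i] + translation[i]
        for i in range(3)
    )

identity = ((1, 0, 0), (0, 1, 0), (0, 0, 1))
# With an identity rotation part, only the translation is applied.
p_world = transform_point((1.0, 2.0, 3.0), identity, (10.0, 0.0, 0.0))
```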
Oops, I’m wrong. The number of faces is bigger in my case. I will fix it. Wait.