
[Closed] Need some point caching advice

I’m working on something that outputs a lot of particles (think SPH solver) and they need to be stored in a file per frame.

I could write my own thing, but looping over 25k+ locations from a file and adding them to a particle system via MAXScript is going to be slow. So I was hoping to export to some sort of format Max can read natively (or a free third-party loader) and turn it into a mesh (that contains verts only) or directly into Particle Flow. The particle count isn't constant, so abusing the normal Point Cache system isn't going to work.

Any tips would be welcome!

10 Replies
(@denist)

the first idea that occurs to me is to use a particle Position or Birth script and read the locations from a binary file. we can keep the file open, and i think it will be pretty fast to read the data.
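For illustration, a minimal Birth script sketch along those lines, following the standard Particle Flow script-operator handler layout. The per-frame file name and layout (a long count followed by three floats per particle) are just assumptions, and clearing the previous frame's particles is left out:

	on ChannelsUsed pCont do
	(
		pCont.useTime = true
		pCont.usePosition = true
	)
	on Init pCont do ( )
	on Proceed pCont do
	(
		-- current frame number from the container's evaluation time
		frame = ((pCont.getTimeEnd() as float) / ticksPerFrame) as integer
		-- hypothetical per-frame file written by the solver
		f = fopen (@"C:\temp\frame_" + (frame as string) + ".bin") "rb"
		if f != undefined do
		(
			c = readlong f  -- particle count stored at the head of the file
			for k = 1 to c do
			(
				pCont.AddParticle()
				pCont.particleIndex = pCont.NumParticles()
				pCont.particleTime = pCont.getTimeEnd()
				pCont.particlePosition = [readfloat f, readfloat f, readfloat f]
			)
			fclose f
		)
	)
	on Release pCont do ( )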

(@denist)

that looks fast enough for me:

(
	b = fopen @"C:\temp\binary.bin" "wb"
	t1 = timestamp()
	data = for k=1 to 25000 collect
	(
		p = random x_axis y_axis
		for i=1 to 3 do writefloat b p[i]  -- write x, y, z
		p
	)
	format "write data count:% time:%\n" data.count (timestamp() - t1)
	fflush b
	fclose b
)
(
	b = fopen @"C:\temp\binary.bin" "rb"
	t1 = timestamp()
	data = #()
	p = [0,0,0]
	i = 0
	while (d = readfloat b) != undefined do
	(
		p[i += 1] = d
		if i == 3 do
		(
			append data (copy p)  -- copy: Point3 values are appended by reference
			i = 0
		)
	)
	format "read data count:% time:%\n" data.count (timestamp() - t1)
	fclose b
)

write/read 25,000 Point3 values:
write data count:25000 time:99
read data count:25000 time:189

 lo1

I believe the most performant solution, though definitely not the easiest, would be to implement Thinkbox's PRT file format in your exporter using this specification: http://www.thinkboxsoftware.com/krak-prt-11-file-format/

You can then load it using the unlicensed version of Krakatoa (at least I think they still allow you to do that)

 lo1
(
	b = fopen @"C:\binary.bin" "wb"
	t1 = timestamp()
	WriteLong b 25000 
	data = for k=1 to 25000 collect 
	(
		p = random x_axis y_axis
		for i=1 to 3 do writefloat b p[i]
		p
	)
	format "write data count:% time:%\n" data.count (timestamp() - t1)
	fflush b
	fclose b
)
(
	b = fopen @"C:\binary.bin" "rb"
	t1 = timestamp()
	data = #()
	local c = readlong b
	for i = 1 to c do
	(
		append data [readfloat b, readfloat b, readfloat b]
	)
	format "read data count:% time:%\n" data.count (timestamp() - t1)
	fclose b
)

This is much faster for reads, since the count is known up front.
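A possible further tweak (not timed here): since the count is stored at the head of the file, the array can be preallocated instead of grown with append; a minimal sketch:

(
	b = fopen @"C:\binary.bin" "rb"
	t1 = timestamp()
	c = readlong b
	data = #()
	data.count = c  -- preallocate so the array never has to grow during the loop
	for i = 1 to c do data[i] = [readfloat b, readfloat b, readfloat b]
	format "read data count:% time:%\n" data.count (timestamp() - t1)
	fclose b
)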

 lo1
(
	b = fopen @"C:\binary.bin" "wb"
	t1 = timestamp()
	WriteLong b 250000 
	data = for k=1 to 250000 collect 
	(
		p = random x_axis y_axis
		writefloat b p[1]
		writefloat b p[2]
		writefloat b p[3]
		p
	)
	format "write data count:% time:%\n" data.count (timestamp() - t1)
	fflush b
	fclose b
)

and unrolling the i = 1 to 3 loop makes the write faster as well.

(
	b = fopen @"C:\binary.bin" "wb"
	t1 = timestamp()
	WriteLong b 250000
	data = for k=1 to 250000 collect
	(
		p = random x_axis y_axis
		writefloat b p.x
		writefloat b p.y
		writefloat b p.z
		p
	)
	format "write data count:% time:%\n" data.count (timestamp() - t1)
	fflush b
	fclose b
)

and using .x/.y/.z instead of indexed access makes the write a little faster still.

if we know exactly what we are reading, the read is much faster:


(
	b = fopen @"C:\temp\binary.bin" "rb"
	t1 = timestamp()
	data = #()
	while (d = readfloat b) != undefined do append data [d, readfloat b, readfloat b]
	format "read data count:% time:%\n" data.count (timestamp() - t1)
	fclose b
)

read data count:25000 time:56

so as we can see, a 25,000-point cloud is not a problem to read from a file.
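And to tie this back to the original question, the loaded points can be turned into a verts-only mesh in one call; a minimal sketch, assuming data holds the Point3 values read above:

(
	-- build an editable mesh that contains only vertices, no faces
	pointCloud = mesh vertices:data faces:#() name:"pointCloud"  -- "pointCloud" is just an example name
)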

(@lo1)

Is this faster for you than my code? For me it's slower.

(@denist)

no. yours is faster… because you know better what you are reading

Awesome! I missed the whole binary option and got discouraged thinking about parsing strings to floats, etc.

Thx guys!