Hi again,
I ran into a problem. I’ve noticed that I can’t have more than 500 files open at the same time. Is there any way I could increase this number?
Thanks,
Nick
In 3ds Max 2014, you can increase it to 2048.
It is more of a Windows limitation.
This is from the manual:
<int>systemTools.setmaxstdio <int newmax>
in 3ds Max 2014: Sets the maximum number of simultaneously open files at the stdio level (i.e. via fopen).
The valid range for the newmax argument is between 512 and 2048.
Returns the new maximum value if successful.
Currently, only increasing the value is allowed.
Returns -1 if attempting to reduce the maximum count.
This could be used for example to allow more Point Cache files to be read in parallel at the same time.
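For reference, raising the limit is a one-liner you could run before opening the batch of files; this is a small sketch assuming 3ds Max 2014 or newer:

-- Raise the stdio open-file limit to the documented maximum (valid range 512-2048).
-- Returns the new maximum on success, -1 if you try to reduce it.
newMax = systemTools.setmaxstdio 2048
if newMax == -1 then
    format "Could not change the limit (it can only be increased)\n"
else
    format "Maximum open files is now %\n" newMax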
I’m saving animation data for a batch of objects. Since it’s animation, I’m saving one entry per object for every frame, which is why it was easier to keep all the files open and just append the data without opening/closing each file. Everything was fine until I tried a bigger scene and realized there is a limit of 500 open files.
Now I’m changing the code to collect all the data in memory and then save it to files, but it’s a bit slower.
[edit]
Sorry, I forgot to mention what happens when you exceed 500: fopen returns undefined.
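Since fopen fails silently like that, it may be worth checking the return value instead of writing to an undefined handle. A small defensive sketch (safeOpen is a hypothetical helper, not part of Max):

-- Hypothetical helper: fail loudly when fopen returns undefined,
-- e.g. because the stdio open-file limit has been exceeded.
fn safeOpen fname mode =
(
    local f = fopen fname mode
    if f == undefined do
        throw ("fopen failed for " + fname + " -- too many open files?")
    f
)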
Sounds like a mess to me. You’re saving 48 bytes (one matrix3) of data per file.
If you have 500 objects and 500 frames, you’ll end up with a whopping 250,000 tiny files, which Windows might not even handle well if they are all in the same folder.
It would make much more sense to save the data to one file per object, or even one file for the entire cache, which for the same scene would only be about 12MB.
I suppose I didn’t make myself clear, sorry for that.
I’m saving one file per object. In that one file I store all the animation for this object.
My problem was the following.
Old code:
filesAr = for obj in objsAr collect
(
    -- create a new file
)
for t = 0 to 100 do
(
    for o = 1 to objsAr.count do
    (
        -- add data to filesAr[o] from objsAr[o]
    )
)
for f in filesAr do
(
    -- close f
)
The problem: since I couldn’t have that many files open at once, I would have had to do this instead:
for t = 0 to 100 do
(
    for obj in objsAr do
    (
        -- open the file for obj
        -- add data
        -- close it, so the next object's file can be opened
    )
)
My solution:
valuesAr = for o in objsAr collect #()
for t = 0 to 100 do
(
    for o = 1 to objsAr.count do
    (
        -- add data to valuesAr[o] from objsAr[o]
    )
)
for v in valuesAr do
(
    -- save the values of v to its file
)
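Fleshed out, this collect-then-save pattern could look like the following sketch (objsAr, the frame range, and outDir are assumptions; it caches one transform per object per frame, then writes one binary file per object, so only one file is ever open at a time):

-- Cache each object's transform for every frame, then flush to disk.
-- outDir is a hypothetical output folder.
outDir = @"c:\temp\cache\"
valuesAr = for o in objsAr collect #()
for t = 0 to 100 do at time t
(
    for o = 1 to objsAr.count do
        append valuesAr[o] objsAr[o].transform
)
for o = 1 to objsAr.count do
(
    local f = fopen (outDir + objsAr[o].name + ".dat") "wb"
    for m in valuesAr[o] do
        for r = 1 to 4 do
        (
            -- a matrix3 row is a point3: 12 floats (48 bytes) per frame
            writeFloat f m[r].x
            writeFloat f m[r].y
            writeFloat f m[r].z
        )
    fclose f
)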
What about:
for obj in objsAr do
(
    -- open the file for obj
    for t = 0 to 100 do
    (
        -- add data
    )
    -- close
)
The problem with this version is that Max has to recalculate the whole scene for every object, which is dead slow.
From my test, the cost of a time context switch is only about 30% more than the unswitched version, which is probably negligible compared to the file-writing time.
obj = $
ts = timestamp()
for i = 1 to 1000000 do
(
local a = obj.transform
)
print (timestamp()-ts)
ts = timestamp()
for i = 1 to 1000000 do
(
at time i (local a = obj.transform)
)
print (timestamp()-ts)
430ms vs 566ms
I’ve tested the following code on a relatively small production scene with two rigged tanks from our latest job.
(
    objs = for o in objects where not o.isHidden collect o
    format "[Benchmark test]\n"
    format "- objects tested: %\n" objs.count
    format "- start frame: %\n" animationRange.start
    format "- end frame: %\n" animationRange.end
    ts = timestamp()
    for o in objs do
    (
        for t = animationRange.start to animationRange.end do at time t ( o.transform )
    )
    format "- first method: %s\n" ((timestamp() - ts) * .001)
    ts = timestamp()
    for t = animationRange.start to animationRange.end do at time t
    (
        for o in objs do ( o.transform )
    )
    format "- second method: %s\n" ((timestamp() - ts) * .001)
)
And this is the result…
[Benchmark test]
- objects tested: 529
- start frame: 0f
- end frame: 45f
- first method: 1574.642s
- second method: 8.648s
And that’s the small scene with just 529 objects and 46 frames. The big scene I’m doing the test on has about 8600 objects and 115 frames.
Must be something to do with your controllers.
- Are they history dependent?
- Do you have any time change callbacks in the scene?
- Can you recreate the slowdown in a neutral test?
I tried this:
(
delete objects
objs = for i = 1 to 1000 collect
(
local o = teapot()
for t = animationRange.start to animationRange.end do
(
with animate on
(
at time t
(
o.position = random [-100,-100,-100] [100,100,100]
)
)
)
o
)
format "[Benchmark test]\n"
format "- objects tested: %\n" objs.count
format "- start frame: %\n" animationRange.start
format "- end frame: %\n" animationRange.end
ts = timestamp()
for o in objs do (
    for t = animationRange.start to animationRange.end do at time t ( o.transform )
)
format "- first method: %s\n" ((timestamp() - ts) * .001)
ts = timestamp()
for t = animationRange.start to animationRange.end do at time t (
    for o in objs do ( o.transform )
)
format "- second method: %s\n" ((timestamp() - ts) * .001)
)
And got this:
[Benchmark test]
- objects tested: 1000
- start frame: 0f
- end frame: 100f
- first method: 0.109s
- second method: 0.102s
Yes, I suppose it’s the controllers, but most of the rigs are quite complex. There are no time change callbacks in the scene, but I’m afraid that to recreate the slowdown I would need a production-ready scene, and that takes time. I’ll give it a go, but I can’t spend much time on it. If I manage to recreate it, I’ll post the scene.