[Closed] Transforming vertex to camera space

Hi,
I’m trying to draw single pixels on a bitmap at the positions where an object’s verts will appear in the rendered image. However, they aren’t exactly where they should be. You can see my bitmap composited over a scanline rendering here: scaleBox.jpg. It looks as if there is a slight difference in scale, but I have no idea what causes it.

This is the source code of my script:
myPic = bitmap 640 480
myMesh = $Box01
myCam = $Camera01
pixPos = [0,0]

for i in 1 to (getNumVerts myMesh) do
(
    v = getVert myMesh i
    worldv = v

    camv = worldv * (inverse myCam.transform)

    pixPos.x = (atan (camv.x / -camv.z)) / (myCam.fov / 2.0) * (myPic.Width / 2.0) + (myPic.Width / 2.0)
    pixPos.y = (atan (-camv.y / -camv.z)) / ((myCam.fov / 2.0) * (myPic.Height / (myPic.Width as float))) * (myPic.Height / 2.0) + (myPic.Height / 2.0)

    picColor = #(color 255 255 255)
    setPixels myPic [pixPos.x as integer, pixPos.y as integer] picColor
)

display myPic

There’s probably something wrong with the way I calculate pixPos, but I have no idea what. Can anyone help me make this work right?

2 Replies

Check out the example in the MAXScript Reference:
“How To … Develop a Vertex Renderer”
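
The scale difference is most likely coming from the atan mapping: a perspective camera projects a point by dividing x by -z and by tan(fov/2), rather than mapping the angle linearly to pixels. Something along these lines (just a rough, untested sketch; it assumes fov is the horizontal field of view, square pixels, and the same $Box01/$Camera01 names as your script):

myPic = bitmap 640 480
myMesh = $Box01
myCam = $Camera01
halfW = myPic.width / 2.0
halfH = myPic.height / 2.0
focal = halfW / (tan (myCam.fov / 2.0)) -- horizontal FOV -> focal length in pixels

for i in 1 to (getNumVerts myMesh) do
(
    -- object space -> world space -> camera space
    worldv = (getVert myMesh i) * myMesh.objectTransform
    camv = worldv * (inverse myCam.transform)
    if camv.z < 0 do -- only project points in front of the camera
    (
        px = (camv.x / -camv.z) * focal + halfW
        py = (-camv.y / -camv.z) * focal + halfH
        setPixels myPic [px as integer, py as integer] #(color 255 255 255)
    )
)

display myPic

Multiplying by objectTransform converts the vertex to world space first, so it should still line up even if the box isn’t sitting at the world origin.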

Cheers,
Martijn

Thanks, Martijn.
I also found another example in the MAX SDK help.

regards