[Closed] 16-bit channel bitmaps and decimal numbers
While working with bitmaps in MaxScript using commands such as getPixels and setPixels, images can have 16 bits per channel, but the values you read and write are always expressed in the (color 0 0 0 0) – (color 255 255 255 255) range, with the extra precision exposed as decimals. Aside from the minor annoyance that things get complicated when you want to work with 256 integer values and the decimals don't automatically line up, I now have a case where I actually want to take advantage of the 16-bit depth for an accurate data map.

The problem is having to operate on decimals that are converted back and forth to 1/65535 steps. I haven't found a reliable way to determine which exact decimal a pixel value will be rounded to, since the spacing between the decimal values varies depending on which integer value it corresponds to. To give an example, say I want to increment a color value by the smallest possible unit. The best I can do is add 0.00389099 (roughly 255/65535), or start with an integer that is multiplied by that value before output, but the discrepancy between this method and the decimal it actually rounds to is unpredictable; the spacing is not consistent. It would be possible, for example, to just barely miss a cutoff and end up with the same pixel value twice in a row.

Anyway, I'm just wondering if there's an easy and reliable way to work with the 16-bit depth that would save me a lot of hairy multiplying and converting garbage.
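To make it concrete, this is the sort of round trip I keep writing by hand (just a sketch; the file path and the helper names to16 / from16 are placeholders of mine, not anything built in):

-- map a 0-255 float channel to a whole 0-65535 step and back
fn to16 c = ((c / 255.0 * 65535.0 + 0.5) as integer)   -- round to nearest 16-bit step
fn from16 i = (i * 255.0 / 65535.0)                     -- back to the 0-255 float MaxScript expects

bmp = openBitMap "heightData16.tif"        -- placeholder path to a 16-bit-per-channel image
px = (getPixels bmp [0,0] 1)[1]            -- read one pixel as a color value

r16 = to16 px.r                            -- work in whole 16-bit steps
if r16 < 65535 do r16 += 1                 -- bump by exactly one step
px.r = from16 r16

setPixels bmp [0,0] #(px)                  -- write it back

Doing the math in whole 0–65535 integers like this sidesteps the uneven decimal spacing, but it's exactly the kind of multiplying and converting I'd like to avoid if there's a more direct way.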