num2str changes value?

If I use num2str(0.123456789) to output this number as a string, it truncates the value to 0.12346. Is there a way to prevent this and create a string that represents the exact variable value?
Although it is somewhat less convenient, you can get whatever format and precision you want using
sprintf stringName, formatStr [,parameter]...
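For example, here is a minimal sketch of using sprintf to control precision (the function name DemoSprintf and the chosen precision are just illustrations):
Function DemoSprintf()
    string myStr
    sprintf myStr, "%.9f", 0.123456789    // nine digits after the decimal point (illustrative choice)
    print myStr                           // prints 0.123456789
end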
Hmm...will try. I'm not very familiar with that command.

Is there a way to determine the precision of a variable? In trying this out, unless I specify the exact precision, the value may also change. For example, with the following uses I get these outputs:

sprintf myString, "%f", 0.123456789 // myString becomes 0.1234567

sprintf myString, "%.*f", 20, 0.123456789 // myString becomes 0.12345678899999999734

Is there a way to have sprintf output a string that exactly represents the variable?
This will output a number with a defined precision after the decimal point ...
Function/S Num2StrF(num, pad)
    variable num, pad

    string fstr
    sprintf fstr, "%.*f", pad, num    // pad sets the number of digits after the decimal point

    return fstr
end


print num2strf(0.123456789,3) --> 0.123
print num2strf(333.123456789,3) --> 333.123

--
J. J. Weimer
Chemistry / Chemical & Materials Engineering, UAHuntsville
My problem is that setting the significant digits to a fixed number means the value gets either truncated or changed. I need to avoid any truncation or rounding of the value when converting it to a string. And if I use a very large number of digits, such as the 20 I demonstrated above, the value is also changed.

I need some way to avoid this and preserve the exact value as a string, regardless of the value's size or precision.
It also seems odd to me that sprintf changes the value in the way I mentioned above when given a large number of decimal places. That seems like a bug to me (I'm using the Mac version of Igor 6.31).
Double-precision numbers have about 16 decimal digits of precision. You can ask sprintf to print more, but any additional digits have no meaning. The behavior of Igor's sprintf is based on the behavior of the C language sprintf. In fact, Igor mostly just calls the C sprintf.

To see the value of a variable with full precision use "%.16g" with sprintf. Leading zeros are suppressed. To see them use "%016g".

Here is a demonstration:
Printf "%.16g\r", PI  // Prints  3.141592653589793
Variable myPI = 3.141592653589793
Print/D PI - myPI    // Prints 0


If you are printing to the history, you can use Print/D, which uses "%.15g" internally. This is handy during debugging. But it appears to me from the demonstration above that "%.16g" gives a slightly more accurate result than "%.15g".
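For instance, comparing the two widths on PI (the outputs shown are what IEEE double precision gives):
Printf "%.15g\r", PI  // Prints  3.14159265358979
Printf "%.16g\r", PI  // Prints  3.141592653589793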

Somewhat related, here is a good article on floating point numbers: http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
[quote=hrodstein]To see the value of a variable with full precision use "%.16g" with sprintf.[/quote]

Perfect! That works great! I used the 16-place specification along with jjweimer's suggestion to put it in a separate function, and it's working nicely. It makes a great, more precise substitute for the num2str function.
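In case it's useful to anyone else, here is a minimal sketch of what such a wrapper can look like (the name Num2StrFull is just illustrative; it is jjweimer's function with the format changed to "%.16g"):
Function/S Num2StrFull(num)
    variable num

    string fstr
    sprintf fstr, "%.16g", num    // all meaningful digits of a double-precision value

    return fstr
end

print Num2StrFull(0.123456789) --> 0.123456789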

Thanks everyone.
tkessler wrote:
I need some way to avoid this and preserve the exact value as a string, regardless of the value's size or precision.

You seem to have resolved your display issue, but just to correct this point (if Howard's link didn't clear it up):

The value of your number did get changed, but it wasn't sprintf that changed it. It happened when your base 10 description of the number (0.123456789) was rounded to the nearest 53-bit base 2 number for storage in memory, operations, calculations, etc. You'll see this with any equivalent formatting function (often also called sprintf) in any software package on any computer, and it even shows up with "simple" numbers like 0.1.
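For example, printing 0.1 with more decimal places than a double actually carries shows the stored base 2 approximation:
Printf "%.20f\r", 0.1  // Prints  0.10000000000000000555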
ikonen wrote:
The value of your number did get changed, but it wasn't sprintf that changed it. It happened when your base 10 description of the number was rounded to the nearest 53-bit base 2 number for storage.

The main issue was the rounding/truncation that occurred because num2str keeps too few significant digits. I agree that the precision will be limited by the bit depth of the variable (i.e., a double-precision value is limited to 64-bit precision, or about 16 decimal digits); however, with num2str keeping only 5 or so significant digits, using it to manage calculations resulted in errors.

For instance, I've been saving wave scaling factors as metadata notes for each wave. When I did this with num2str, the saved scaling factors were sometimes slightly off from the ones actually used to multiply the wave, so when I undid the scaling by retrieving the note and dividing the wave by that number, the wave's values differed from the originals. An original wave with values like 1, 2, 3, 4, and 5 would end up as 1.00034, 2.00068, 3.00102, etc., once the scaling was reverted.

By storing and retrieving the scaling factor with the most significant digits I can, this error is eliminated and the scaled wave returns to its original values (or at least to values so close that Igor shows no difference even when I use print/D to sample the stored values in the wave).
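For what it's worth, here is a minimal sketch of that approach (the function names ScaleAndTag/UnscaleFromTag and the note key SCALEFACTOR are hypothetical; it assumes the factor is written into the wave note with "%.16g" and read back before dividing):
Function ScaleAndTag(w, factor)
    wave w
    variable factor

    string noteStr
    sprintf noteStr, "SCALEFACTOR:%.16g;", factor    // store the factor at full double precision
    Note/K w, noteStr                                // replace the wave note with this entry
    w *= factor
end

Function UnscaleFromTag(w)
    wave w

    variable factor = NumberByKey("SCALEFACTOR", note(w))    // read the stored factor back
    w /= factor
end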