I have traditionally avoided floating-point arithmetic because it is a royal pain to work with. Those who have worked with it know what I am talking about. What is much more frustrating is when users see floating-point rounding as some sort of fatal error. Take, for instance, 6.934376572E-150. Programmers and people who understand the limitations of floating-point know that a number like that is, for all practical purposes, zero. To an accountant or banker or a regular PC user, that number has an E in it, so it must be an error: "What could I possibly have done wrong this time?"
Making sure your users do not see floating-point artifacts is a real problem and a difficult one to solve. One user will want two digits of precision all the time, another won't care as long as trailing zeroes are chopped off, and still another will want the option to see the raw floating-point output. Given that most users (by the 80/20 rule of thumb) just want the zeroes chopped off, a small bit of C/C++ code can tidy up a floating-point number for display. Since very few people need more than 5 to 7 digits of precision, a format string might look like %.5f (or %.5lf for doubles). Then your zero comes out as '0.00000'. To make it look "pretty", chop off the trailing zeroes in the string after the decimal point; then, if nothing is left after the decimal point, chop that off too. This yields a nice even '0' to display to the user.
That is a start, anyway. Floating-point is pretty evil stuff once you get into the nitty-gritty. Many beginning programmers fall into the same pit again and again, especially when comparing two values for equality. You have to define a tolerance within which two values count as equal, but every tolerance you allow is precision you give up: values that differ by less than the tolerance become indistinguishable. Of course, if accuracy really matters, there are good arbitrary-precision math libraries out there to run calculations through, and a few scripting languages have them built in as well.