The reason you see this when using more than 15 decimal places is that computers can't actually represent the value 0.7 exactly. This is because all numbers are stored in binary format (i.e. each digit is either a 1 or a 0), and 0.7 has no exact finite binary representation. It's the same reason why it's impossible to represent the fraction 1/3 exactly in base 10. This behaviour occurs on all CPU architectures, operating systems, and applications.
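If you want to see the value that actually gets stored, here's a minimal sketch in Python (used purely for illustration; any language with IEEE 754 doubles stores the same value):

```python
from decimal import Decimal

# Constructing a Decimal directly from a float shows the exact binary
# value that was stored, written out in base 10.
print(Decimal(0.7))
# 0.6999999999999999555910790149937383830547332763671875
```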
When you *do* see the value 0.7, it's because the routine that converts the number from floating point to text for display on screen only does so to a certain number of decimal places. Once you ask for fewer than 15 decimal places, the rounding causes it to come out as 0.70000000000000, which is then displayed as 0.7.
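You can see the effect of the display precision directly; again a Python sketch, just as an illustration:

```python
x = 0.7

print(f"{x:.20f}")   # 0.69999999999999995559  -- enough digits to expose the error
print(f"{x:.15f}")   # 0.700000000000000       -- rounded, the error disappears
print(x)             # 0.7                     -- default display picks the shortest
                     #    decimal that round-trips back to the same float
```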
See https://ece.uwaterloo.ca/~dwharder/NumericalAnalysis/02Numerics/Double/paper.pdf for more details
The alternative is to use fixed point arithmetic, where you essentially represent numbers in smaller whole units, for example storing amounts as cents instead of dollars if you're dealing with financial data. Banks and other financial institutions use this because money has to be represented as a whole number of cents. It's also better than just changing the display format as I've described above, because with floating point you can still get calculation errors. For example, if you start with 0 and add 0.1 to it 1,000,000 times, the result will be roughly 100,000.000001 - not what you'd expect.
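Here's a rough sketch of both approaches in Python (illustrative only; the variable names and the loop are made up for this example):

```python
# Floating point: tiny representation errors accumulate across many additions.
total = 0.0
for _ in range(1_000_000):
    total += 0.1
print(total)                      # roughly 100000.000001, not exactly 100000.0

# Fixed point: do all arithmetic in whole cents, convert only for display.
total_cents = 0
for _ in range(1_000_000):
    total_cents += 10             # 10 cents per addition
print(f"{total_cents // 100}.{total_cents % 100:02d}")   # 100000.00 exactly
```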