Double precision is different in different languages
I am experimenting with the precision of double values in various programming languages.
My programs
main.c
#include <stdio.h>

int main() {
    for (double i = 0.0; i < 3; i = i + 0.1) {
        printf("%.17lf\n", i);
    }
    return 0;
}
main.cpp
#include <iostream>
using namespace std;

int main() {
    cout.precision(17);
    for (double i = 0.0; i < 3; i = i + 0.1) {
        cout << fixed << i << endl;
    }
    return 0;
}
main.py
i = 0.0
while i < 3:
    print(i)
    i = i + 0.1
Main.java
public class Main {
    public static void main(String[] args) {
        for (double i = 0.0; i < 3; i = i + 0.1) {
            System.out.println(i);
        }
    }
}
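One way to tell whether the four languages actually store different bits, rather than just printing the same bits differently, is to dump the raw IEEE-754 pattern of each value. A minimal Python sketch (the `bits` helper is my own name, not part of the programs above):

```python
import struct

def bits(x: float) -> str:
    """Hex dump of the 64-bit IEEE-754 pattern of a double."""
    return struct.pack(">d", x).hex()

# Compare an accumulated value with the corresponding literal:
acc = 0.0
for _ in range(3):
    acc = acc + 0.1

print(bits(acc))  # 0.1 + 0.1 + 0.1
print(bits(0.3))  # the literal 0.3 has a different bit pattern
```

Running the same accumulation in each language and comparing hex dumps would separate "different stored values" from "different printing" directly.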
Output
main.c
0.00000000000000000
0.10000000000000001
0.20000000000000001
0.30000000000000004
0.40000000000000002
0.50000000000000000
0.59999999999999998
0.69999999999999996
0.79999999999999993
0.89999999999999991
0.99999999999999989
1.09999999999999990
1.20000000000000000
1.30000000000000000
1.40000000000000010
1.50000000000000020
1.60000000000000030
1.70000000000000040
1.80000000000000050
1.90000000000000060
2.00000000000000040
2.10000000000000050
2.20000000000000060
2.30000000000000070
2.40000000000000080
2.50000000000000090
2.60000000000000100
2.70000000000000110
2.80000000000000120
2.90000000000000120
main.cpp
0.00000000000000000
0.10000000000000001
0.20000000000000001
0.30000000000000004
0.40000000000000002
0.50000000000000000
0.59999999999999998
0.69999999999999996
0.79999999999999993
0.89999999999999991
0.99999999999999989
1.09999999999999987
1.19999999999999996
1.30000000000000004
1.40000000000000013
1.50000000000000022
1.60000000000000031
1.70000000000000040
1.80000000000000049
1.90000000000000058
2.00000000000000044
2.10000000000000053
2.20000000000000062
2.30000000000000071
2.40000000000000080
2.50000000000000089
2.60000000000000098
2.70000000000000107
2.80000000000000115
2.90000000000000124
main.py
0.0
0.1
0.2
0.30000000000000004
0.4
0.5
0.6
0.7
0.7999999999999999
0.8999999999999999
0.9999999999999999
1.0999999999999999
1.2
1.3
1.4000000000000001
1.5000000000000002
1.6000000000000003
1.7000000000000004
1.8000000000000005
1.9000000000000006
2.0000000000000004
2.1000000000000005
2.2000000000000006
2.3000000000000007
2.400000000000001
2.500000000000001
2.600000000000001
2.700000000000001
2.800000000000001
2.9000000000000012
Main.java
0.0
0.1
0.2
0.30000000000000004
0.4
0.5
0.6
0.7
0.7999999999999999
0.8999999999999999
0.9999999999999999
1.0999999999999999
1.2
1.3
1.4000000000000001
1.5000000000000002
1.6000000000000003
1.7000000000000004
1.8000000000000005
1.9000000000000006
2.0000000000000004
2.1000000000000005
2.2000000000000006
2.3000000000000007
2.400000000000001
2.500000000000001
2.600000000000001
2.700000000000001
2.800000000000001
2.9000000000000012
My question
I know the `double` type itself has some inherent error, which we can read more about in blog posts such as Why You Should Never Use Float and Double for Monetary Calculations and What Every Computer Scientist Should Know About Floating-Point Arithmetic.
But these errors are not random! Each run produces exactly the same errors, so my question is: why do these errors differ between programming languages?
Second, why are the precision errors in Java and Python identical? [Java's JVM is written in C++, while the Python interpreter is written in C.]
Surprisingly, their errors match each other, yet differ from the errors in C and C++. Why does this happen?
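For what it's worth, the Python output alone suggests the difference might lie in formatting rather than in the stored value — this is my assumption, not a confirmed explanation. Printing the same double both ways in Python shows the contrast:

```python
x = 0.0 + 0.1          # first loop iteration, same arithmetic as in every program above

print(x)               # Python's default: the shortest string that round-trips -> 0.1
print(f"{x:.17f}")     # forced to 17 digits, like main.c -> 0.10000000000000001
```

If Java's `System.out.println` also picks a shortest round-trip string while C's `%.17lf` and C++'s `precision(17)` force 17 digits, that would account for Python/Java matching each other but not C/C++.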