Why 0.1 + 0.2 != 0.3 in C? Understanding Floating Point Numbers

via Dev.to / hassaan-syed

HISTORY: IEEE 754 was standardized in 1985 to define floating-point arithmetic, ensuring consistent, reliable, and portable numerical results across different computer hardware.

```c
#include <stdio.h>

int main(void) {
    float a = 0.1;
    float b = 0.2;

    if (a + b == 0.3)
        printf("Equal\n");
    else
        printf("Not Equal\n");

    return 0;
}
```

WHAT DO YOU EXPECT? Equal or Not Equal? If you said Equal, you're wrong: this program prints "Not Equal". A computer cannot store most decimal fractions exactly, which is one reason the IEEE 754 standard exists. To understand the real problem, let's work through an example.

Computers don't understand numbers the way we do. We use decimal (base 10): 0.1 → one tenth. But computers use binary (base 2): 0s and 1s only.

Now here's the catch: some decimal numbers cannot be represented exactly in binary. Just like 1/3 = 0.3333... is infinite in decimal, 0.1 in binary is 0.0001100110011... (the pattern repeats forever).

So what is actually stored in the computer? For example, a float assigned 0.1 stores the nearest representable value, something like: 0.1000000014901161

Continue reading on Dev.to

