Consider the following C/C++ code:
#include <stdint.h>
int test_this() {
    return (uint16_t) 1 >= (int16_t) -1;
}
And now consider this code:
#include <stdint.h>
int test_this() {
    return (uint32_t) 1 >= (int32_t) -1;
}
The only difference is that the 16-bit types have been replaced with uint32_t and int32_t.
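To see the difference at runtime rather than only in the assembly, the two functions can be dropped into a small test program along these lines (the renamed functions and the main driver are my addition for illustration, not part of the original snippets):

#include <stdint.h>
#include <stdio.h>

/* The two comparisons from above, renamed so they can live in one file. */
static int test_16(void) {
    return (uint16_t) 1 >= (int16_t) -1;   /* both operands promoted to int */
}

static int test_32(void) {
    return (uint32_t) 1 >= (int32_t) -1;   /* -1 converted to unsigned */
}

int main(void) {
    printf("16-bit variant: %d\n", test_16());   /* prints 1 */
    printf("32-bit variant: %d\n", test_32());   /* prints 0 */
    return 0;
}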
Assembly code
The 16-bit variant generates the following assembly code:
test_this():
    push rbp
    mov rbp, rsp
    mov eax, 1
    pop rbp
    ret
The 32-bit variant generates the following assembly code:
test_this():
    push rbp
    mov rbp, rsp
    mov eax, 0
    pop rbp
    ret
As you can see, the outcome differs depending on whether we use the 16-bit or the 32-bit types.
Note that all of the code was compiled at -O0 with the most recent versions of clang, gcc, and msvc, and the result is the same on x86, ARM, AArch64, and other targets.
You can try it on godbolt.org.
A possible explanation
In the 16-bit variant, integer promotion converts both comparison operands to int (a 32-bit signed integer on these targets), so the comparison is 1 >= -1 and the function returns 1. In the 32-bit variant, the usual arithmetic conversions convert the signed operand to unsigned: (int32_t) -1 becomes 4294967295 (UINT32_MAX), so the comparison is 1 >= 4294967295 and the function returns 0.
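Written out with the implicit conversions spelled out, the two comparisons effectively behave like this (a sketch of what the promotion and conversion rules produce on a platform where int is 32 bits, not literal compiler output):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* 16-bit case: integer promotion turns both operands into int,
       so the comparison is an ordinary signed comparison. */
    int a = (int)(uint16_t) 1;    /* 1  */
    int b = (int)(int16_t) -1;    /* -1 */
    printf("%d\n", a >= b);       /* prints 1 */

    /* 32-bit case: the usual arithmetic conversions convert the signed
       operand to unsigned, so -1 wraps around to UINT32_MAX. */
    uint32_t c = (uint32_t) 1;              /* 1 */
    uint32_t d = (uint32_t)(int32_t) -1;    /* 4294967295 */
    printf("%d\n", c >= d);                 /* prints 0 */
    return 0;
}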