Here are the definitions copied from my iOS SDK headers. The types/sizes for CGFloat and NS[U]Integer track the 32-/64-bit architecture used to build a particular executable.
#if defined(__LP64__) && __LP64__
# define CGFLOAT_TYPE double
#else
# define CGFLOAT_TYPE float
#endif
typedef CGFLOAT_TYPE CGFloat;
#if __LP64__ || (TARGET_OS_EMBEDDED && !TARGET_OS_IPHONE) || TARGET_OS_WIN32 || NS_BUILD_32_LIKE_64
typedef long NSInteger;
typedef unsigned long NSUInteger;
#else
typedef int NSInteger;
typedef unsigned int NSUInteger;
#endif
Regarding NSPoint, I see it defined as
typedef CGPoint NSPoint;
so its x and y fields will be CGFloats.
On Aug 31, 2017, at 12:19, Jens Alfke <jens@...
I don't see how they could be incompatible if they're the same size.
I don’t remember the exact details either, but one issue is that there are places in the Obj-C runtime metadata where the @encode string of a method signature matters. Some types with the same size and the same representation are nonetheless denoted by different letters in that string, so even though method parameters and return values were bit-for-bit identical, the differing type encodings could make the methods incompatible.
One particular example I remember, unrelated to this thread’s topic, is that in 64-bit, NSUInteger encoded to “Q” (unsigned long long) rather than “L” (unsigned long). Both types were 8 bytes, and NSUInteger was even typedef’ed to unsigned long (IIRC), but “L” was never used in that architecture.
The NSPoint/CGPoint thing was something like that. IIRC, NSPoint used float, and CGPoint used CGFloat, and those @encoded differently for some reason.