rpav / cl-autowrap

(c-include "file.h") => complete FFI wrapper
BSD 2-Clause "Simplified" License

Bindings not created in SBCL 1.5 #93

Open patrickmay opened 5 years ago

patrickmay commented 5 years ago

On macOS Mojave 10.14.2 using SBCL 1.5, the following code fails:

(ql:quickload :cl-autowrap)

(defpackage :kafka-autowrap-ffi
  (:use :common-lisp)
  (:export #:rd-kafka-t
           #:rd-kafka-version
           #:rd-kafka-metadata))

(in-package :kafka-autowrap-ffi)

(autowrap:c-include "rdkafka.h")

(cffi:define-foreign-library rdkafka
  (t (:default "librdkafka")))

(cffi:use-foreign-library rdkafka)

(rd-kafka-version)

This results in:

* RDKAFKA
* #<CFFI:FOREIGN-LIBRARY RDKAFKA "librdkafka.dylib">
* ; in: RD-KAFKA-VERSION
;     (KAFKA-AUTOWRAP-FFI:RD-KAFKA-VERSION)
;
; caught STYLE-WARNING:
;   undefined function: KAFKA-AUTOWRAP-FFI:RD-KAFKA-VERSION
;
; compilation unit finished
;   Undefined function:
;     RD-KAFKA-VERSION
;   caught 1 STYLE-WARNING condition

debugger invoked on a UNDEFINED-FUNCTION in thread
#<THREAD "main thread" RUNNING {10005205B3}>:
  The function KAFKA-AUTOWRAP-FFI:RD-KAFKA-VERSION is undefined.

The .spec files are created by c-include, so that part is working. Is my code wrong or is there a known issue with this version of SBCL?

Thanks.

rpav commented 5 years ago

Does the spec have the function in question?

patrickmay commented 5 years ago

Yes:

{ "tag": "const", "name": "RD_KAFKA_VERSION", "ns": 0, "location": "/var/folders/b0/br9v722s5nq0j4m677ncd98c0000gr/T/tmpGHU3ALSV.tmp:8:12", "type": { "tag": ":long", "bit-size": 64, "bit-alignment": 64 } },
rpav commented 5 years ago

There appears to be no value tag, which seems odd, because if this is the appropriate definition, I get one trivially by copying it into a .h file and running c2ffi for macros.

Probably time to check the source and/or output to see what's going on.

patrickmay commented 5 years ago

Running c2ffi on rdkafka.h outputs this for rd_kafka_version:

{ "tag": "function", "name": "rd_kafka_version", "ns": 0, "location": "rdkafka.h:162:5", "variadic": false, "inline": false, "storage-class": "none", "parameters": [], "return-type": { "tag": ":int", "bit-size": 32, "bit-alignment": 32 } },

Where should I see a value tag?

rpav commented 5 years ago

This is the result I get:

[
{ "tag": "const", "name": "RD_KAFKA_VERSION", "ns": 0, "location": "test.m:2:9", "type": { "tag": ":long", "bit-size": 64, "bit-alignment": 64 }, "value": 16777471 }
]

Note I just made a single header with the following as per the source:

#define RD_KAFKA_VERSION 0x010000ff

Running c2ffi -M test.m --with-macro-defs test.h and then c2ffi test.m gives this. Autowrap makes similar calls for you, but debugging with c2ffi directly is probably your best bet at this stage.
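
Spelled out, that two-pass flow is (file names are placeholders):

# pass 1: collect macro definitions from the header into test.m
c2ffi -M test.m --with-macro-defs test.h
# pass 2: parse the generated macro file; the JSON spec goes to stdout
c2ffi test.m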

rpav commented 5 years ago

Note I get the same result using any of i386-pc-linux-gnu, i386-apple-darwin-machos, or x86_64-apple-darwin-machos, differing only in the expected size/alignment values.

rpav commented 5 years ago

Err, actually I get warnings about unhandled triples for anything Apple, and those are probably not correct triples. But the values still generate correctly. :P

patrickmay commented 5 years ago

Following your example with the -M flag, I get the same:

{ "tag": "const", "name": "RD_KAFKA_VERSION", "ns": 0, "location": "test.m:74:9", "type": { "tag": ":long", "bit-size": 64, "bit-alignment": 64 }, "value": 722687 }

(I also got the triples issue.)

So I should see that in my .spec files as well? I just deleted and regenerated them all -- none have the value field.

Could it be a change in SBCL 1.5?

rpav commented 5 years ago

You should; perhaps set *trace-c2ffi* or whatnot and look at the output and the commands being run. Alternatively, just run the -M version on your .h and see what happens. While the trivial case should produce the output above (or this entire thing would otherwise not work!), specific C stuff often gets annoying: required #defines, wrong include paths, etc.

rpav commented 5 years ago

SBCL 1.5 should have nothing to do with this, unless there's something obscure about the environment or that's changing how c2ffi is run, both of which seem unlikely.

patrickmay commented 5 years ago

I modified my code to enable *trace-c2ffi*:

(ql:quickload :cl-autowrap)

(setq autowrap:*trace-c2ffi* t)

(defpackage :kafka-autowrap-ffi
  (:use :common-lisp)
  (:export #:rd-kafka-t
           #:rd-kafka-version
           #:rd-kafka-metadata))

(in-package :kafka-autowrap-ffi)

(autowrap:c-include "rdkafka.h")

The output included lines like this:

; Invoking: c2ffi rdkafka.h -D null -M /var/folders/b0/br9v722s5nq0j4m677ncd98c0000gr/T/tmpGHU3ALSX.tmp -A x86_64-apple-darwin9
; Invoking: c2ffi /var/folders/b0/br9v722s5nq0j4m677ncd98c0000gr/T/tmpAAURSO1.tmp -o /Users/Patrick/projects/cl-wrap-kafka/autowrap/rdkafka.x86_64-apple-darwin9.spec -A x86_64-apple-darwin9

The generated macro file looks like this:

const long __c2ffi_RD_KAFKA_EVENT_DR = RD_KAFKA_EVENT_DR;
const long __c2ffi_RD_KAFKA_EVENT_NONE = RD_KAFKA_EVENT_NONE;
const long __c2ffi_RD_KAFKA_EVENT_REBALANCE = RD_KAFKA_EVENT_REBALANCE;
const long __c2ffi_RD_KAFKA_EVENT_ERROR = RD_KAFKA_EVENT_ERROR;
const long __c2ffi_RD_KAFKA_EVENT_LOG = RD_KAFKA_EVENT_LOG;
const long __c2ffi_RD_KAFKA_EVENT_FETCH = RD_KAFKA_EVENT_FETCH;
const long __c2ffi_RD_KAFKA_EVENT_STATS = RD_KAFKA_EVENT_STATS;
const long __c2ffi_RD_KAFKA_EVENT_CREATETOPICS_RESULT = RD_KAFKA_EVENT_CREATETOPICS_RESULT;
const long __c2ffi_RD_KAFKA_EVENT_ALTERCONFIGS_RESULT = RD_KAFKA_EVENT_ALTERCONFIGS_RESULT;
const long __c2ffi_RD_KAFKA_EVENT_CREATEPARTITIONS_RESULT = RD_KAFKA_EVENT_CREATEPARTITIONS_RESULT;
const long __c2ffi_RD_KAFKA_V_END = RD_KAFKA_V_END;
const long __c2ffi_RD_KAFKA_EVENT_DELETETOPICS_RESULT = RD_KAFKA_EVENT_DELETETOPICS_RESULT;
const long __c2ffi_RD_KAFKA_EVENT_OFFSET_COMMIT = RD_KAFKA_EVENT_OFFSET_COMMIT;
const long __c2ffi_RD_KAFKA_EVENT_DESCRIBECONFIGS_RESULT = RD_KAFKA_EVENT_DESCRIBECONFIGS_RESULT;
const long __c2ffi__RDKAFKA_H_ = _RDKAFKA_H_;
const long __c2ffi_RD_UNUSED = RD_UNUSED;
const long __c2ffi_RD_DEPRECATED = RD_DEPRECATED;
const long __c2ffi_RD_EXPORT = RD_EXPORT;
const long __c2ffi_RD_INLINE = RD_INLINE;
const long __c2ffi_LIBRDKAFKA_TYPECHECKS = LIBRDKAFKA_TYPECHECKS;
const long __c2ffi_RD_KAFKA_OFFSET_TAIL_BASE = RD_KAFKA_OFFSET_TAIL_BASE;
const long __c2ffi_RD_KAFKA_OFFSET_STORED = RD_KAFKA_OFFSET_STORED;
const long __c2ffi_RD_KAFKA_OFFSET_INVALID = RD_KAFKA_OFFSET_INVALID;
const long __c2ffi_RD_KAFKA_OFFSET_BEGINNING = RD_KAFKA_OFFSET_BEGINNING;
const long __c2ffi_RD_KAFKA_OFFSET_END = RD_KAFKA_OFFSET_END;
const long __c2ffi_RD_KAFKA_PARTITION_UA = RD_KAFKA_PARTITION_UA;
const long __c2ffi_RD_KAFKA_DESTROY_F_NO_CONSUMER_CLOSE = RD_KAFKA_DESTROY_F_NO_CONSUMER_CLOSE;
const long __c2ffi_RD_KAFKA_MSG_F_PARTITION = RD_KAFKA_MSG_F_PARTITION;
const long __c2ffi_RD_KAFKA_VERSION = RD_KAFKA_VERSION;
const long __c2ffi_RD_KAFKA_MSG_F_BLOCK = RD_KAFKA_MSG_F_BLOCK;
const long __c2ffi_RD_KAFKA_MSG_F_COPY = RD_KAFKA_MSG_F_COPY;
const long __c2ffi_RD_KAFKA_MSG_F_FREE = RD_KAFKA_MSG_F_FREE;
const char* __c2ffi_RD_KAFKA_DEBUG_CONTEXTS = RD_KAFKA_DEBUG_CONTEXTS;

The .spec files still don't contain the value field.

When I run c2ffi from the command line, my macro file contains the same lines, in a different order.

So I tried generating the macro files and specs myself with:

c2ffi -M arm-pc-linux-gnu.m --with-macro-defs -A arm-pc-linux-gnu rdkafka.h
c2ffi arm-pc-linux-gnu.m -o rdkafka.arm-pc-linux-gnu.spec -A arm-pc-linux-gnu
c2ffi -M i386-unknown-freebsd.m --with-macro-defs -A i386-unknown-freebsd rdkafka.h
c2ffi i386-unknown-freebsd.m -o rdkafka.i386-unknown-freebsd.spec -A i386-unknown-freebsd
c2ffi -M i386-unknown-openbsd.m --with-macro-defs -A i386-unknown-openbsd rdkafka.h
c2ffi i386-unknown-openbsd.m -o rdkafka.i386-unknown-openbsd.spec -A i386-unknown-openbsd
c2ffi -M i686-apple-darwin9.m --with-macro-defs -A i686-apple-darwin9 rdkafka.h
c2ffi i686-apple-darwin9.m -o rdkafka.i686-apple-darwin9.spec -A i686-apple-darwin9
c2ffi -M i686-pc-linux-gnu.m --with-macro-defs -A i686-pc-linux-gnu rdkafka.h
c2ffi i686-pc-linux-gnu.m -o rdkafka.i686-pc-linux-gnu.spec -A i686-pc-linux-gnu
c2ffi -M i686-pc-windows-msvc.m --with-macro-defs -A i686-pc-windows-msvc rdkafka.h
c2ffi i686-pc-windows-msvc.m -o rdkafka.i686-pc-windows-msvc.spec -A i686-pc-windows-msvc
c2ffi -M x86_64-apple-darwin9.m --with-macro-defs -A x86_64-apple-darwin9 rdkafka.h
c2ffi x86_64-apple-darwin9.m -o rdkafka.x86_64-apple-darwin9.spec -A x86_64-apple-darwin9
c2ffi -M x86_64-pc-linux-gnu.m --with-macro-defs -A x86_64-pc-linux-gnu rdkafka.h
c2ffi x86_64-pc-linux-gnu.m -o rdkafka.x86_64-pc-linux-gnu.spec -A x86_64-pc-linux-gnu
c2ffi -M x86_64-pc-windows-msvc.m --with-macro-defs -A x86_64-pc-windows-msvc rdkafka.h
c2ffi x86_64-pc-windows-msvc.m -o rdkafka.x86_64-pc-windows-msvc.spec -A x86_64-pc-windows-msvc
c2ffi -M x86_64-unknown-freebsd.m --with-macro-defs -A x86_64-unknown-freebsd rdkafka.h
c2ffi x86_64-unknown-freebsd.m -o rdkafka.x86_64-unknown-freebsd.spec -A x86_64-unknown-freebsd
c2ffi -M x86_64-unknown-openbsd.m --with-macro-defs -A x86_64-unknown-openbsd rdkafka.h
c2ffi x86_64-unknown-openbsd.m -o rdkafka.x86_64-unknown-openbsd.spec -A x86_64-unknown-openbsd

All of these spec files include the value tag. In this case, c-include didn't fire off c2ffi, but the symbols from rdkafka are still not found.

I appreciate your help. Any other suggestions, or should I drop back to CFFI?

rpav commented 5 years ago

Probably the first thing to do is run the second command on the generated file, and possibly double check that file. It should include your original file and the new macro file; it's possible/probable it's not finding your .h (in which case set an include path).
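
For example, taking the second command from your trace and re-running it by hand with an include path added (the -i path is a placeholder; point it at wherever rdkafka.h and the headers it pulls in actually live):

c2ffi -i /path/to/headers /var/folders/b0/br9v722s5nq0j4m677ncd98c0000gr/T/tmpAAURSO1.tmp -o rdkafka.x86_64-apple-darwin9.spec -A x86_64-apple-darwin9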

Otherwise you should see the exact errors and output directly, as autowrap would, which may give further leads if necessary.

patrickmay commented 5 years ago

Your response gave me a hint. While the .spec files are the same (except for the value field) whether I generate them from autowrap or the command line, the macro files are different. The macro files generated in /var/folders/... are 33 lines long and contain only lines beginning with "const long __c2ffi_" or "const char* __c2ffi_". The macro files generated from the command line are 132 lines long and have a bunch of defines at the top, like this:

/* rdkafka.h:2562:9 */
#define RD_KAFKA_OFFSET_STORED -1000

So it appears that you're right about c2ffi not seeing something important. Which include path should I set?

rpav commented 5 years ago

Can you pastebin/gist the output... including stderr... of autowrap's c2ffi commands? These kinds of things usually show up as errors near the top of the output. If my C++ had been less crap when I wrote c2ffi, there would probably be better handling, but alas. ;)
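
If autowrap itself won't hand you stderr, the easiest route is to re-run the traced commands by hand and capture both streams, e.g. (the log file name is a placeholder):

c2ffi rdkafka.h -D null -M /tmp/macros.m -A x86_64-apple-darwin9 2>&1 | tee step1.log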

borodust commented 5 years ago

If it would be of any help, I use these routines to figure out include paths (GCC must be installed and properly configured):

(ql:quickload :claw)
(claw::dump-all-gcc-include-paths)
(claw::dump-all-darwin-framework-paths)
rpav commented 5 years ago

This won't matter since you're not passing them to autowrap; I'm guessing it's not finding your original .h in the second run.

Unfortunately there's not been a way to get "default paths" from clang, so c2ffi just makes some basic guesses and you have to specify the rest... including paths to your own headers, I think.
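
You can at least see what the clang driver itself would search, from the shell; this is a standard clang trick rather than anything c2ffi-specific:

clang -E -x c /dev/null -v 2>&1 | sed -n '/search starts here/,/End of search list/p'

That prints the include search list the driver would use, which you can then feed to c2ffi with -i/-I.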

patrickmay commented 5 years ago

I didn't see how to get stderr from autowrap, so I ran the same c2ffi invocations from the command line. The first one is here: https://pastebin.com/2nL7x8TW and the second one, building from the generated .m file, is here: https://pastebin.com/M45DUUkA

There are a lot of "Skipping invalid Decl" messages in the second paste, which explains why no symbols are interned. The first one supports your suggestion, with "stdio.h not found". Should I set CPATH to the LLVM installation or GCC?

rpav commented 5 years ago

Basically you just need to add -i or -I pointing at wherever your stdio.h and other relevant headers live, until it stops complaining about missing headers (or you get all the decls you want).

Those can then be specified to autowrap.
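
Roughly like this, from memory (double-check the keyword against the c-include source; the path is a placeholder):

(autowrap:c-include "rdkafka.h"
  ;; :sysincludes should end up as c2ffi -i arguments
  :sysincludes '("/path/to/sdk/usr/include"))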

patrickmay commented 5 years ago

That helped make some progress. When I run this:

c2ffi -i /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include rdkafka.h -D null -M ./apple.m -A x86_64-apple-darwin9

I get a macro file with 1245 lines and the only error output is:

c2ffi warning: Unhandled environment: '' for triple 'x86_64-apple-darwin9'

I get the same if I use i686-apple-darwin9.

Unfortunately, running the second invocation with the same include:

c2ffi -i /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include:. apple.m -o ./apple.spec -A x86_64-apple-darwin9 >& second-step

still doesn't result in the value field showing up in the .spec file.

I noticed that when I run c2ffi to get its usage instructions, they include these lines:

      -A, --arch           Specify the target triple for LLVM
                           (default: x86_64-apple-darwin18.2.0)

I tried using that as the -A flag, but that didn't work either.

If other people are using this successfully on Mojave, whatever I'm doing wrong must be simple. I'm damned if I can figure it out, though.

patrickmay commented 5 years ago

I spun up an EC2 instance; installed LLVM, cmake, rdkafka, c2ffi, and SBCL; and tried it from there. It generated the .spec files with a bunch of warnings (see here: https://pastebin.com/KhZdwmr3), but the value tag was present. I copied those to my Mac and ran my script again, which generated a few fewer warnings (see here: https://pastebin.com/6rKWswWu), and when I ran (rd-kafka-version-str), it worked!

I'll still spend some time getting it to work on my Mac, but this is at least a workaround. Now I can spend some time figuring out why this typedef makes autowrap unhappy:

typedef struct rd_kafka_message_s {
    rd_kafka_resp_err_t err;   /**< Non-zero for error signaling. */
    rd_kafka_topic_t *rkt;     /**< Topic */
    int32_t partition;         /**< Partition */
    void   *payload;           /**< Producer: original message payload.
                    * Consumer: Depends on the value of \c err :
                    * - \c err==0: Message payload.
                    * - \c err!=0: Error string */
    size_t  len;               /**< Depends on the value of \c err :
                    * - \c err==0: Message payload length
                    * - \c err!=0: Error string length */
    void   *key;               /**< Depends on the value of \c err :
                    * - \c err==0: Optional message key */
    size_t  key_len;           /**< Depends on the value of \c err :
                    * - \c err==0: Optional message key length*/
    int64_t offset;            /**< Consume:
                                    * - Message offset (or offset for error
                    *   if \c err!=0 if applicable).
                                    * - dr_msg_cb:
                                    *   Message offset assigned by broker.
                                    *   If \c produce.offset.report is set then
                                    *   each message will have this field set,
                                    *   otherwise only the last message in
                                    *   each produced internal batch will
                                    *   have this field set, otherwise 0. */
    void  *_private;           /**< Consume:
                    *  - rdkafka private pointer: DO NOT MODIFY
                    *  - dr_msg_cb:
                                    *    msg_opaque from produce() call */
} rd_kafka_message_t;

Thanks for all your help so far.

skissane commented 3 years ago

@patrickmay I don't know anything about cl-autowrap (never used it), but I can explain that c2ffi warning: Unhandled environment: '' for triple 'x86_64-apple-darwin9' message you got. c2ffi expects the "triple" to actually be a "quadruple" with -gnu on the end. That's because Clang supports multiple modes of operation ("environment types"). The two environment types supported by c2ffi are GNU mode (compatible with GCC) and MSVC mode (compatible with Microsoft tools), and the -gnu suffix is necessary to tell it to use GNU mode.

The warning tells you it detected neither GNU mode nor MSVC mode, which may or may not have an impact. (The mode does change Clang's behaviour in some ways, such as whether it accepts certain GNU or Microsoft extensions, so it really depends on whether the input header files use any of those extensions.) LLVM itself supports many more environment types than just GNU and MSVC; right now c2ffi doesn't support any of the others, and I don't know whether it would be useful for it to do so.
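
So, hypothetically (I haven't tried this myself on macOS), appending -gnu to the triple in your earlier invocation should make that warning go away:

c2ffi apple.m -o apple.spec -A x86_64-apple-darwin9-gnu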