Releases: confluentinc/librdkafka
v1.2.2
librdkafka v1.2.2 release
v1.2.2 fixes the producer performance regression introduced in v1.2.1 which may affect high-throughput producer applications.
Fixes
- Fix producer insert msgq regression in v1.2.1 (#2450).
- Upgrade builtin lz4 to 1.9.2 (CVE-2019-17543, #2598).
- Don't trigger error when broker hostname changes (#2591).
- Less strict message.max.bytes check for individual messages (#993).
- Don't call `timespec_get()` on OSX (since it was removed in recent Xcode) by @maparent.
- configure: add --runstatedir for compatibility with autoconf.
- LZ4 is available from ProduceRequest 0, not 3 (fixes assert in #2480).
- Address 12 code issues identified by Coverity static code analysis.
Enhancements
- Add warnings for inconsistent security configuration.
- Optimizations to hdr histogram (stats) rollover.
- Reorganized examples and added a cleaner consumer example, added minimal C++ producer example.
- Print compression type per message-set when `debug=msg`
Checksums
Release asset checksums:
- v1.2.2.zip SHA256
7557b37e5133ed4c9b0cbbc3fd721c51be8e934d350d298bd050fcfbc738e551
- v1.2.2.tar.gz SHA256
c5d6eb6ce080431f2996ee7e8e1f4b8f6c61455a1011b922e325e28e88d01b53
v1.2.1
librdkafka v1.2.1 release
Warning: v1.2.1 has a producer performance regression which may affect high-throughput producer applications. We recommend such users upgrade to v1.3.0.
v1.2.1 is a maintenance release:
- Properly handle new Kafka-framed SASL GSSAPI frame semantics on Windows (#2542).
  This bug was introduced in v1.2.0 and broke GSSAPI authentication on Windows.
- Fix msgq (re)insertion code to avoid O(N^2) insert sort operations on retry (#2508).
  The msgq insert code now properly handles interleaved and overlapping
  message range inserts, which may occur during producer retries for
  high-throughput applications.
- configure: added `--disable-c11threads` to avoid using libc-provided C11 threads.
- configure: added more autoconf compatibility options to ignore.
Checksums
Release asset checksums:
- v1.2.1.zip SHA256
8b5e95318b190f40cbcd4a86d6a59dbe57b54a920d8fdf64d9c850bdf05002ca
- v1.2.1.tar.gz SHA256
f6be27772babfdacbbf2e4c5432ea46c57ef5b7d82e52a81b885e7b804781fd6
v1.2.0
librdkafka v1.2.0 release
WARNING: There is an issue with SASL GSSAPI authentication on Windows with this release. Upgrade directly to v1.2.1 which fixes the issue.
v1.2.0 is a feature release making the consumer transaction aware.
- Transaction aware consumer (`isolation.level=read_committed`) implemented by @mhowlett.
- Sub-millisecond buffering (`linger.ms`) on the producer.
- Improved authentication errors (KIP-152)
Consumer-side transaction support
This release adds consumer-side support for transactions.
In previous releases, the consumer always delivered all messages to the application, even those in aborted or not yet committed transactions. In this release, the consumer will by default skip messages in aborted transactions.
This is controlled through the new `isolation.level` configuration property, which
defaults to `read_committed` (only read committed messages, filtering out messages
from aborted and not-yet committed transactions). To consume all messages, including
those from aborted transactions, set this property to `read_uncommitted`, which
restores the behaviour of previous releases.
For consumers in `read_committed` mode, the end of a partition is now defined to be the offset of the last message of a successfully committed transaction (referred to as the 'Last Stable Offset').
For non-transactional messages there is no change from previous releases: they will always be read, but a consumer will not advance into a not-yet committed transaction on the partition.
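The delivery rule above can be illustrated with a minimal sketch in plain C. This is not librdkafka's implementation; the message struct and aborted-transaction list are invented for illustration. A `read_committed` consumer delivers a message only if its offset is below the Last Stable Offset and its producer's transaction was not aborted:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical, simplified types for illustration only. */
typedef struct {
    int64_t offset;
    int64_t pid;  /* producer id of the transaction, -1 if non-transactional */
} msg_t;

/* Return true if a read_committed consumer should deliver `m`.
 * `aborted_pids` lists producer ids whose transactions were aborted;
 * `lso` is the Last Stable Offset, beyond which the consumer must not
 * advance (it would enter a not-yet committed transaction). */
static bool deliver_read_committed(const msg_t *m,
                                   const int64_t *aborted_pids, int abort_cnt,
                                   int64_t lso) {
    if (m->offset >= lso)
        return false;           /* beyond the Last Stable Offset */
    if (m->pid == -1)
        return true;            /* non-transactional: always delivered */
    for (int i = 0; i < abort_cnt; i++)
        if (aborted_pids[i] == m->pid)
            return false;       /* part of an aborted transaction */
    return true;                /* committed transaction */
}
```

In `read_uncommitted` mode the first and third checks simply do not apply, which is why it matches the behaviour of previous releases.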
Upgrade considerations
- `linger.ms` default was changed from 0 to 0.5 ms to promote some level of batching even with default settings.
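The batching effect of a non-zero `linger.ms` can be sketched in plain C (invented types, not librdkafka code): a batch is sent once it is full or the oldest queued message has waited at least `linger.ms`:

```c
#include <stdbool.h>

/* Simplified batching decision, for illustration only. */
typedef struct {
    int    msg_cnt;          /* messages currently in the batch */
    int    max_msgs;         /* batch size limit */
    double first_msg_age_ms; /* time since the first message was queued */
    double linger_ms;        /* e.g. the new 0.5 ms default */
} batch_t;

/* Send the batch when it is full, or when the oldest message
 * has lingered for at least linger.ms. */
static bool batch_ready(const batch_t *b) {
    if (b->msg_cnt == 0)
        return false;
    return b->msg_cnt >= b->max_msgs ||
           b->first_msg_age_ms >= b->linger_ms;
}
```

With `linger.ms=0` every message is eligible for transmission immediately; the new 0.5 ms default lets several messages accumulate into one batch under load.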
New configuration properties
- Consumer property `isolation.level=read_committed` ensures the consumer will only read messages from successfully committed producer transactions. Default is `read_committed`. To get the previous behaviour, set the property to `read_uncommitted`, which will read all messages produced to a topic, regardless of whether the message was part of an aborted or not-yet committed transaction.
Enhancements
- Offset commit metadata (arbitrary application-specified data) is now returned by `rd_kafka_committed()` and `rd_kafka_offsets_for_times()` (@damour, #2393)
- C++: Added `Conf::c_ptr*()` to retrieve the underlying C config object.
- Added `on_thread_start()` and `on_thread_exit()` interceptors.
- Increase `queue.buffering.max.kbytes` max to INT_MAX.
- Optimize varint decoding, increasing consume performance by ~15%.
Fixes
General:
- Rate limit IO-based queue wakeups to `linger.ms`: this reduces CPU load and lock contention for high throughput producer applications. (#2509)
- Reduce memory allocations done by `rd_kafka_topic_partition_list_new()`.
- Fix socket recv error handling on MSVC (by Jinsu Lee).
- Avoid 1s stalls in some scenarios when broker wakeup-fd is triggered.
- SSL: Use only the hostname (not the port) when validating the broker hostname (by Hunter Jacksson)
- SSL: Ignore OpenSSL cert verification results if `enable.ssl.certificate.verification=false` (@salisbury-espinosa, #2433)
- rdkafka_example_cpp: fix metadata listing mode (@njzcx)
- SASL Kerberos/GSSAPI: don't treat kinit ECHILD errors as errors (@hannip, #2421)
- Fix compare overflows (#2443)
- configure: Add option to disable automagic dependency on zstd (by Thomas Deutschmann)
- Documentation updates and fixes by Cedric Cellier and @ngrandem
- Set thread name on Mac OS X (by Nikhil Benesch)
- C++: Fix memory leak in `Headers` (by Vladimir Sakharuk)
- Fix UBSan (undefined behaviour) errors (@PlacidBox, #2417)
- CONFIGURATION.md: escape `||` inside markdown table (@mhowlett)
- Refresh broker list metadata even if there are no topics to refresh (#2476)
Consumer:
- Make `rd_kafka_pause|resume_partitions()` synchronous, making sure that a subsequent `consumer_poll()` will not return messages for the paused partitions (#2455).
- Fix incorrect toppar destroy in OffsetRequest (@binary85, #2379)
- Fix message version 1 offset calculation (by Martin Ivanov)
- Defer commit in transport error to avoid consumer_close hang.
Producer:
- Messages were not timed out for leader-less partitions (.NET issue #1027).
- Improve message timeout granularity to millisecond precision (the smallest effective message timeout will still be 1000 ms).
- `message.timeout.ms=0` is now accepted even if `linger.ms` > 0 (by Jeff Snyder)
- Don't track `max.poll.interval.ms` unless in consumer mode: this saves quite a few memory barriers for high-performance producers.
- Optimization: avoid atomic fatal error code check when idempotence is disabled.
Checksums
Release asset checksums:
- v1.2.0.zip SHA256
6e57f09c28e9a65abb886b84ff638b2562b8ad71572de15cf58578f3f9bc45ec
- v1.2.0.tar.gz SHA256
eedde1c96104e4ac2d22a4230e34f35dd60d53976ae2563e3dd7c27190a96859
v1.1.0
librdkafka v1.1.0 release
v1.1.0 is a security-focused feature release:
- SASL OAUTHBEARER support (by @rondagostino at StateStreet)
- In-memory SSL certificates (PEM, DER, PKCS#12) support (by @noahdav at Microsoft)
- Pluggable broker SSL certificate verification callback (by @noahdav at Microsoft)
- Use Windows Root/CA SSL Certificate Store (by @noahdav at Microsoft)
- `ssl.endpoint.identification.algorithm=https` (off by default) to validate that the broker hostname matches the certificate. Requires OpenSSL >= 1.0.2.
- Improved GSSAPI/Kerberos ticket refresh
Upgrade considerations
- Windows SSL users will no longer need to specify a CA certificate file/directory (`ssl.ca.location`): librdkafka will load the CA certs by default from the Windows Root Certificate Store.
- SSL peer (broker) certificate verification is now enabled by default (disable with `enable.ssl.certificate.verification=false`)
- `%{broker.name}` is no longer supported in `sasl.kerberos.kinit.cmd` since kinit refresh is no longer executed per broker, but per client instance.
SSL
New configuration properties:
- `ssl.key.pem` - client's private key as a string in PEM format
- `ssl.certificate.pem` - client's public key as a string in PEM format
- `enable.ssl.certificate.verification` - enable (default) / disable OpenSSL's builtin broker certificate verification.
- `ssl.endpoint.identification.algorithm` - to verify the broker's hostname against its certificate (disabled by default).
- Added new `rd_kafka_conf_set_ssl_cert()` to pass PKCS#12, DER or PEM certs in (binary) memory form to the configuration object.
- The private key data is now securely cleared from memory after last use.
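Securely clearing key material is commonly done by writing zeros through a volatile pointer so the compiler cannot optimize the wipe away as a dead store. A generic sketch of the technique (not librdkafka's actual code):

```c
#include <stddef.h>

/* Overwrite sensitive data (e.g. PEM private key bytes) with zeros.
 * The volatile-qualified pointer prevents the compiler from eliding
 * the wipe even though the buffer is never read again. */
static void secure_clear(void *p, size_t len) {
    volatile unsigned char *vp = (volatile unsigned char *)p;
    while (len--)
        *vp++ = 0;
}
```

A plain `memset()` before `free()` can legally be removed by the optimizer, which is why a volatile write loop (or a platform primitive such as `explicit_bzero()`) is used instead.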
Enhancements
- configure: Improve library checking
- Added `rd_kafka_conf()` to retrieve the client's configuration object
- Bump `message.timeout.ms` max value from 15 minutes to 24 days (@sarkanyi, workaround for #2015)
Fixes
- SASL GSSAPI/Kerberos: Don't run kinit refresh for each broker, just per client instance.
- SASL GSSAPI/Kerberos: Changed `sasl.kerberos.kinit.cmd` to first attempt ticket refresh, then acquire.
- SASL: Proper locking on broker name acquisition.
- Consumer: `max.poll.interval.ms` now correctly handles blocking poll calls, allowing a longer poll timeout than the max poll interval.
- configure: Fix libzstd static lib detection
- rdkafka_performance: Fix misleading "All messages delivered!" message (@solar_coder)
- Windows build and CMake fixes (@myd7349)
Checksums
Release asset checksums:
- v1.1.0.zip SHA256
70279676ed863c984f9e088db124ac84a080e644c38d4d239f9ebd3e3c405e84
- v1.1.0.tar.gz SHA256
123b47404c16bcde194b4bd1221c21fdce832ad12912bd8074f88f64b2b86f2b
v1.0.1
librdkafka v1.0.1 release
v1.0.1 is a maintenance release with the following fixes:
- Fix consumer stall when broker connection goes down (issue #2266 introduced in v1.0.0)
- Fix AdminAPI memory leak when broker does not support request (@souradeep100, #2314)
- Update/fix protocol error response codes (@benesch)
- Treat ECONNRESET as standard Disconnects (#2291)
v1.0.0
librdkafka v1.0.0 release
v1.0.0 is a major feature release:
- Idempotent producer - guaranteed ordering, exactly-once producing.
- Sparse/on-demand connections - connections are no longer maintained to all brokers in the cluster.
- KIP-62 - `max.poll.interval.ms` for high-level consumers
This release also changes configuration defaults and deprecates a set
of configuration properties; make sure to read the Upgrade considerations
section below.
Upgrade considerations (IMPORTANT)
librdkafka v1.0.0 is API (C & C++) and ABI (C) compatible with older versions of librdkafka, but there are changes to configuration properties that may require changes to existing applications.
Configuration default changes
The following configuration properties have changed default values, which
may require application changes:
- `acks` (alias `request.required.acks`) default is now `all` (wait for ack from all in-sync replica brokers); the previous default was `1` (only wait for ack from the partition leader), which could cause data loss if the leader broker went down.
- `enable.partition.eof` is now `false` by default. Applications that rely on `ERR__PARTITION_EOF` being emitted must now explicitly set this property to `true`. This change was made to simplify the common-case consumer application consume loop.
- `broker.version.fallback` was changed from `0.9` to `0.10` and `broker.version.fallback.ms` was changed to 0.
  Users on Apache Kafka <0.10 must set `api.version.request=false` and `broker.version.fallback=..` to their broker version. For users on >=0.10 there is no longer any need to specify any of these properties. See https://github.com/edenhill/librdkafka/wiki/Broker-version-compatibility for more information.
Deprecated configuration properties
- `topic.metadata.refresh.fast.cnt` is no longer used.
- `socket.blocking.max.ms` is no longer used.
- `reconnect.backoff.jitter.ms` is no longer used, see `reconnect.backoff.ms` and `reconnect.backoff.max.ms`.
- `offset.store.method=file` is deprecated.
- `offset.store.path` is deprecated.
- `offset.store.sync.interval.ms` is deprecated.
- `queuing.strategy` was an experimental property that is now deprecated.
- `msg_order_cmp` was an experimental property that is now deprecated.
- `produce.offset.report` is no longer used. Offsets are always reported.
- `auto.commit.enable` (topic level) for the simple (legacy) consumer is now deprecated.
Use of any deprecated configuration property will result in a warning when the client instance is created.
The deprecated configuration properties will be removed in a future version of librdkafka.
See issue #2020 for more information.
Configuration checking
The checks for incompatible configuration have been improved: client
instantiation (`rd_kafka_new()`) will now fail if incompatible configuration
is detected.
`max.poll.interval.ms` is enforced
This release adds support for `max.poll.interval.ms` (KIP-62), which requires
the application to call `rd_kafka_consumer_poll()`/`rd_kafka_poll()`
at least every `max.poll.interval.ms`.
Failure to do so will make the consumer automatically leave the group, causing a group rebalance,
and not rejoin the group until the application has called ..poll() again, triggering yet another group rebalance.
`max.poll.interval.ms` is set to 5 minutes by default.
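The enforcement amounts to a timestamp check, sketched here in plain C (the struct and function names are invented for illustration; librdkafka does this internally):

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative only: decide whether the consumer has exceeded
 * max.poll.interval.ms and must leave the group. */
typedef struct {
    int64_t last_poll_ms;         /* timestamp of the last ..poll() call */
    int64_t max_poll_interval_ms; /* default: 300000 (5 minutes) */
} consumer_t;

static void on_poll(consumer_t *c, int64_t now_ms) {
    c->last_poll_ms = now_ms;  /* each poll resets the interval timer */
}

static bool must_leave_group(const consumer_t *c, int64_t now_ms) {
    return now_ms - c->last_poll_ms > c->max_poll_interval_ms;
}
```

Applications with long per-message processing times should either raise `max.poll.interval.ms` or call poll more often (e.g. from the processing loop).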
Idempotent Producer
This release adds support for Idempotent Producer, providing exactly-once
producing and guaranteed ordering of messages.
Enabling idempotence is as simple as setting the `enable.idempotence`
configuration property to `true`.
There are no required application changes, but it is recommended to add
support for the newly introduced fatal errors that will be triggered when the idempotent producer encounters an unrecoverable error that would break the ordering or duplication guarantees.
See Idempotent Producer in the manual and the Exactly once semantics blog post for more information.
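The guarantee rests on per-partition sequence numbers: the broker accepts a produce batch only if its sequence is exactly one greater than the last one persisted for that producer id. A simplified broker-side sketch of this check (invented types, not the actual protocol code):

```c
#include <stdint.h>

typedef enum { ACCEPT, DUPLICATE, OUT_OF_ORDER } seq_result_t;

/* Illustration of the idempotence check: `last_seq` is the last
 * sequence number persisted for this (producer id, partition). */
static seq_result_t check_sequence(int32_t last_seq, int32_t incoming_seq) {
    if (incoming_seq == last_seq + 1)
        return ACCEPT;       /* the next expected batch */
    if (incoming_seq <= last_seq)
        return DUPLICATE;    /* a retry of an already-persisted batch */
    return OUT_OF_ORDER;     /* a gap: unrecoverable, surfaces as a fatal error */
}
```

The DUPLICATE case is how retries become exactly-once, and the OUT_OF_ORDER case is one of the conditions that triggers the new fatal errors mentioned above.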
Sparse connections
In previous releases librdkafka would maintain open connections to all
brokers in the cluster and the bootstrap servers.
With this release librdkafka now connects to a single bootstrap server
to retrieve the full broker list, and then connects to the brokers
it needs to communicate with: partition leaders, group coordinators, etc.
For large scale deployments this greatly reduces the number of connections
between clients and brokers, and avoids the repeated idle connection closes
for unused connections.
See Sparse connections in the manual for more information.
Original issue #825.
Features
- Add support for ZSTD compression (KIP-110, @mvavrusa). Caveat: will not currently work with topics configured with `compression.type=zstd`; instead use `compression.type=producer`, see #2183.
- Added `max.poll.interval.ms` (KIP-62, #1039) to allow long processing times.
- Message Header support for C++ API (@davidtrihy)
Enhancements
- Added `rd_kafka_purge()` API to purge messages from producer queues (#990)
- Added fatal errors (see `ERR__FATAL` and `rd_kafka_fatal_error()`) to
  raise unrecoverable errors to the application. Currently only triggered
  by the Idempotent Producer.
- Added `rd_kafka_message_status()` producer API that may be used from
  the delivery report callback to know if the message was persisted to brokers
  or not. This is useful for applications that want to perform manual retries
  of messages, to know if a retry could lead to duplication.
- Backoff reconnects exponentially (see `reconnect.backoff.ms` and `reconnect.backoff.max.ms`).
- Add `broker[..].req["reqType"]` per-request-type metrics to statistics.
- CONFIGURATION.md: Added Importance column.
- `./configure --install-deps` (and also `--source-deps-only`) will automatically install dependencies through the native package manager and/or from source.
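The exponential reconnect backoff named by `reconnect.backoff.ms` and `reconnect.backoff.max.ms` follows the standard doubling-with-cap policy, sketched here in plain C (a generic sketch of the technique, not librdkafka's source):

```c
#include <stdint.h>

/* Double the backoff on each consecutive failed reconnect attempt,
 * capped at backoff_max_ms (cf. reconnect.backoff.ms and
 * reconnect.backoff.max.ms). Attempt 0 uses the base backoff. */
static int64_t next_backoff_ms(int64_t backoff_ms, int attempt,
                               int64_t backoff_max_ms) {
    int64_t b = backoff_ms;
    for (int i = 0; i < attempt; i++) {
        b *= 2;
        if (b >= backoff_max_ms)
            return backoff_max_ms;
    }
    return b;
}
```

Real implementations usually add jitter on top of this curve so that many clients do not reconnect in lockstep after a broker outage.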
Fixes
General
- `rd_kafka_version()` was not thread safe
- Round up microsecond->millisecond timeouts to 1 ms in internal scheduler
  to avoid CPU-intensive busy-loop.
- Send connection handshake requests before lower prio requests.
- Fix timespec conversion to avoid infinite loop (#2108, @boatfish)
- Fix busy-loop: Don't set POLLOUT (due to requests queued) in CONNECT state (#2118)
- Broker hostname max size increased from 127 to 255 bytes (#2171, @Vijendra07Kulhade)
Consumer
- C++: Fix crash when Consumer ctor fails
- Make sure LeaveGroup is sent on unsubscribe and consumer close (#2010, #2040)
- Remember the offset set by assign()/seek() when pause()ing (#2105)
- Fix handling of mixed MsgVersions in same FetchResponse (#2090)
Producer
- Added `delivery.timeout.ms` -> `message.timeout.ms` alias
- Prevent int overflow while computing abs_timeout for producer request… (#2050, @KseniyaYakil).
- Producer: fix re-ordering corner-case on retry.
Windows
- win32: `cnd_timedwait*()` could leave the cond signalled, resulting in high CPU usage.
Build/installation/tooling
- Makefile: fix install rule (#2049, @pacovn)
- Fix counting error in rdkafka_performance #1542 (#2028, @gnanasekarl)
- OpenSSL 1.1.0 compatibility (#2000, @nouzun, @wiml)
- Set OpenSSL locking callbacks as required, don't call CRYPTO_cleanup_all_ex_data (#1984, @ameihm0912)
- Fix 64-bit IBM i build error (#2017, @ThePrez)
- CMake: Generate pkg-config files (@Oxymoron79, #2075)
- mklove: suggest brew packages to install on osx
- rdkafka_performance: Add an option to dump the configuration (@AKohn)
- Check for libcrypto explicitly (OSX Mojave, #2089)
v0.11.6
v0.11.6 is a maintenance release.
Critical fixes
- The internal timer could wrap in under 8 days on Windows, causing stalls and hangs. Bug introduced in v0.11.5. #1980
- Purge and retry buffers in outbuf queue on connection fail (#1913). Messages could get stuck in internal queues on retry and broker down.
Enhancements
- Enable low latency mode on Windows by using a TCP "pipe". Users no longer need to set `socket.blocking.max.ms` to improve latency. (#1930, @LavaSpider)
- Added `rd_kafka_destroy_flags()` to control destroy behaviour. Can be used to force a consumer to terminate without leaving the group or committing final offsets.
Fixes
- Producer: Serve UA queue when transitioning topic from UNKNOWN. Messages could get stuck in UA partition queue on metadata timeout (#1985).
- Fix partial read issue on unix platforms without recvmsg()
- Improve disconnect detection on Windows (#1937)
- Use atomics for refcounts on all platforms with atomics support (#1873)
- Message err was not set for on_ack interceptors on broker reply (#1892)
- Fix consumer_lag to -1 when neither app_offset nor committed_offset is available (#1911)
- Fix crash: Insert retriable messages on partition queue, not xmit queue (#1965)
- Fix crash: don't enqueue messages for retry when handle is terminating.
- Disconnect regardless of socket.max.fails when partial request times out (#1955)
- Now builds with libressl (#1896, #1901, @secretmike)
- Call poll when flush() is called with a timeout of 0 (#1950)
- Proper locking of set_fetch_state() when OffsetFetch response is outdated
- Destroy unknown and no longer desired partitions (fixes destroy hang)
- Handle FetchResponse for partitions that were removed during fetch (#1948)
- rdkafka_performance: Default the message size to the length of the pattern unless given explicitly (#1899, @ankon)
- Fix timeout reuse in queue serving/polling functions (#1863)
- Fix rd_atomic*_set() to use __atomic or __sync when available
- NuGet: change runtime from win7-.. to more generic win-.. (CLIENTS-1188)
- Fix crash: failed ops enq could be handled by original dest queue callbacks
- Update STATISTICS.md to match code (@ankon, #1936)
- rdkafka_performance: Don't sleep while waiting for delivery reports (#1918, @ankon)
- Remove unnecessary 100ms sleep when broker goes from UP -> DOWN (#1895)
- rdhdrhistogram: Fix incorrect float -> int cast causing havoc on MIPS (@andoma)
- "Message size too large": The receive buffer would grow by x2 (up to the limit) each time a EOS control message was seen. (#1472)
- Added support for system-provided C11 threads, e.g. alpine/musl. This fixes weird behaviour on Alpine (#1998)
- Fix high CPU usage after disconnect from broker while waiting for SASL response (#2032, @stefanseufert)
- Fix dont-destroy-from-rdkafka-thread detection logic to avoid assert when using plugins.
- Fix LTO warnings with gcc 8 (#2038, @Romain-Geissler-1A)
v0.11.5
v0.11.5 is a feature release that adds support for the Kafka Admin API (KIP-4).
Admin API
This release adds support for the Admin API, enabling applications and users to perform administrative Kafka tasks programmatically:
- Create topics - specifying partition count, replication factor and topic configuration.
- Delete topics - delete topics in cluster.
- Create partitions - extend a topic with additional partitions.
- Alter configuration - set, modify or delete configuration for any Kafka resource (topic, broker, ..).
- Describe configuration - view configuration for any Kafka resource.
The API closely follows the Java Admin API:
https://github.com/edenhill/librdkafka/blob/master/src/rdkafka.h#L4495
New and updated configuration
- Added `compression.level` configuration option, which allows fine-tuning of the gzip and LZ4 compression level (@erkoln)
- Implement `ssl.curves.list` and `ssl.sigalgs.list` configuration settings (@jvgutierrez)
- Changed `queue.buffering.backpressure.threshold` default (#1848)
Enhancements
- Callback based event notifications (@fede1024)
- Event callbacks may now optionally be triggered from a dedicated librdkafka background thread, see `rd_kafka_conf_set_background_event_cb()`.
- Log the value that couldn't be found for flag configuration options (@ankon)
- Add support for `rd_kafka_conf_set_events(conf, ..EVENT_ERROR)` to allow generic errors to be retrieved as events.
- Avoid allocating BIOs and copying for base64 processing (@agl)
- Don't log connection close for idle connections (regardless of `log.connection.close`)
- Improve latency by using high-precision QPC clock on Windows
) - Improve latency by using high-precision QPC clock on Windows
- Added `make uninstall`
- Minor documentation updates from replies to #1794 (@briot)
- Added `rd_kafka_controllerid()` to return the current controller.
- INTRODUCTION.md: add chapter on latency measurement.
- Add relative hyperlinks to table of contents in INTRODUCTION.md (#1791, @stanislavkozlovski)
- Improved statistics:
- Added Hdr Histograms for all windowed stats (rtt, int_latency, throttle) #1798
- Added top-level totals for broker receive and transmit metrics.
- Added batchcnt, batchsize histograms to consumer.
- Added outbuf_latency histograms.
- STATISTICS.md moved from wiki to source tree.
Fixes
- Fixed murmur2 partitioner to make it compatible with the Java version (#1816, @lins05)
- Fix pause/resume: next_offset was not properly initialized
- Fix a segmentation fault in rdkafka_buf with zero-length string (@sunny1988)
- Set error string length in rkmessage.len on error (#1851)
- Don't let metadata ERR_UNKNOWN set topic state to non-existent.
- The `app_offset` metric is now reset to INVALID when the fetcher is stopped.
- `consumer_lag` is now calculated as `consumer_lag = hi_wmark_offset - MAX(app_offset, committed_offset)`, which makes it correct after a reassignment but before new messages have been consumed (#1878)
- `socket.nagle.disable=true` was never applied on non-Windows platforms (#1838)
- Update interface compile definitions for Windows using CMake (#1800, @raulbocanegra)
- Fix queue hang when queue_destroy() is called on rdkafka-owned queue (#1792)
- Fix hang on unclean termination when there are outstanding requests.
- Metadata: fix crash when topic is in transitionary state
- Metadata: sort topic partition list
- rdkafka_example emitted bogus produce errors
- Increase BROKERS_MAX to 10K and PARTITIONS_MAX to 100K
- Proper log message on SSL connection close
- Missing return on error causes use-after-free in SASL code (@sidhpurwala-huzaifa)
- Fix configure --pkg-config-path=... (#1797, @xbolshe)
- Fix -fsanitize=undefined warning for overflowed OP switches (#1789)
v0.11.4
Maintenance release
Default changes
- `socket.max.fails` changed to `1` to provide the same functionality (fail request immediately on error) now that retries are working properly again.
- `fetch.max.bytes` (new config property) is automatically adjusted to be >= `message.max.bytes`, and `receive.message.max.bytes` is automatically adjusted to be > `fetch.max.bytes`. (#1616)
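The two constraints above can be expressed directly in C (a sketch of the stated rules; the struct names mirror the config properties, and the 512-byte headroom value is an illustrative assumption, not librdkafka's actual margin):

```c
#include <stdint.h>

/* Enforce: fetch.max.bytes >= message.max.bytes and
 *          receive.message.max.bytes > fetch.max.bytes (#1616). */
typedef struct {
    int64_t message_max_bytes;
    int64_t fetch_max_bytes;
    int64_t receive_message_max_bytes;
} fetch_conf_t;

static void adjust_fetch_conf(fetch_conf_t *c) {
    /* A fetch must be able to hold at least one maximum-size message. */
    if (c->fetch_max_bytes < c->message_max_bytes)
        c->fetch_max_bytes = c->message_max_bytes;
    /* The receive buffer must fit the fetch response plus protocol
     * overhead; 512 bytes of headroom is an illustrative value. */
    if (c->receive_message_max_bytes <= c->fetch_max_bytes)
        c->receive_message_max_bytes = c->fetch_max_bytes + 512;
}
```

This auto-adjustment is what eliminates the "Invalid response size" errors mentioned under Noteworthy fixes.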
New features
- Message Headers support (with help from @johnistan)
- Java-compatible Murmur2 partitioners (#1468, @barrotsteindev)
- Add PKCS#12 Keystore support - `ssl.keystore.location` (#1494, @AMHIT)
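Java compatibility for the murmur2 partitioners means producing the same hash and partition mapping as the Java client. A self-contained sketch of Kafka-style murmur2 partitioning (the seed `0x9747b28c` and mixing constants are the well-known Kafka values; this is illustrative, not librdkafka's exact source):

```c
#include <stdint.h>

/* Kafka-style murmur2 (seed 0x9747b28c), as used by the Java client. */
static uint32_t kafka_murmur2(const void *key, int len) {
    const uint32_t m = 0x5bd1e995;
    const int r = 24;
    uint32_t h = 0x9747b28c ^ (uint32_t)len;
    const unsigned char *d = (const unsigned char *)key;

    while (len >= 4) {
        /* Read 4 bytes little-endian, matching Java's byte handling. */
        uint32_t k = (uint32_t)d[0] | ((uint32_t)d[1] << 8) |
                     ((uint32_t)d[2] << 16) | ((uint32_t)d[3] << 24);
        k *= m; k ^= k >> r; k *= m;
        h *= m; h ^= k;
        d += 4; len -= 4;
    }
    switch (len) {               /* mix the trailing bytes */
    case 3: h ^= (uint32_t)d[2] << 16; /* FALLTHRU */
    case 2: h ^= (uint32_t)d[1] << 8;  /* FALLTHRU */
    case 1: h ^= (uint32_t)d[0]; h *= m;
    }
    h ^= h >> 13; h *= m; h ^= h >> 15;
    return h;
}

/* Map a key to a partition the way the Java client does:
 * mask off the sign bit, then modulo the partition count. */
static int partition_for_key(const void *key, int len, int partition_cnt) {
    return (int)((kafka_murmur2(key, len) & 0x7fffffff) %
                 (uint32_t)partition_cnt);
}
```

Getting this bit-for-bit identical to the Java client matters because producers in different languages must route the same key to the same partition.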
Noteworthy fixes
- Formalise and fix Producer retries and retry-ordering (#623, #1092, #1432, #1476, #1421)
  - Ordering is now retained despite retries if `max.in.flight=1`.
  - Behaviour is now documented
- Fix timeouts for retried requests and properly handle retries for all request types (#1497)
- Add and use `fetch.max.bytes` to limit total Fetch response size (KIP-74, #1616). Fixes "Invalid response size" issues.
Enhancements
- Added `sasl.mechanism` and `compression.type` configuration property aliases for conformance with the Java client.
- Improved Producer performance
- C++: add c_ptr() to Handle, Topic, Message classes to expose the underlying librdkafka object
- Honour per-message partition in produce_batch() if MSG_F_PARTITION set (@barrotsteindev, closes #1604)
- Added `on_request_sent()` interceptor
- Added experimental flexible producer `queuing.strategy=fifo|lifo`
- Broker address DNS record round-robin: try to maintain round-robin position across resolve calls.
- Set system thread name for internal librdkafka threads (@tbsaunde)
- Added more concise and user-friendly 'consumer' debug context
- Add `partitioner` (string) topic configuration property to set the builtin partitioners
- Generate rdkafka-static.pc (pkg-config) for static linking
Fixes
- Fix producer memory leak on <0.11 brokers when compressed messageset is below copy threshold (closes #1534)
- CRC32C - fix unaligned access on ARM (@Soundman32)
- Fix read after free in buf_write_seek
- Fix broker wake up (#1667, @gduranceau)
- Fix consumer hang when rebalancing during commit (closes #1605, @ChenyuanHu)
- CMake fixes for Windows (@raulbocanegra)
- LeaveGroup was not sent on close when doing final offset commits
- Fix for consumer slowdown/stall on compacted topics where actual last offset < MsgSet.LastOffset (KAFKA-5443)
- Fix global->topic conf fallthru in C++ API
- Fix infinite loop on LeaveGroup failure
- Fix possible crash on OffsetFetch retry
- Incorporate compressed message count when deciding on fetch backoff (#1623)
- Fix debug-only crash on Solaris (%s NULL) (closes #1423)
- Drain broker ops queue on termination to avoid hang (closes #1596)
- cmake: Allow build static library (#1602, @proller)
- Don't store invalid offset as next one when pausing (#1453, @mfontanini)
- use #if instead of #ifdef / defined() for atomics (#1592, @vavrusa)
- fixed .lib paths in nuget packaging (#1587)
- Fix strerror_r crash on alpine (#1580, @skarlsson)
- Allow arbitrary-length (>255) SASL PLAIN user/pass (#1691, #1692)
- Trigger ApiVersionRequest on reconnect if broker.version.fallback supports it (closes #1694)
- Read Fetch MsgAttributes as int8 (discovered by @tvoinarovskyi, closes #1689)
- Portability: stop using typeof in rdkafka_transport.c (#1708, @tbsaunde)
- Portability: replace use of #pragma once with header guards (#1688, @tbsaunde)
- mklove: add LIBS in reverse order to maintain dependency order
- Fix build when python is not available #1358
v0.11.3
Maintenance release
Default changes
- Change default queue.buffering.max.kbytes and queued.max.message.kbytes to 1GB (#1304)
- win32: Use sasl.kerberos.service.name for broker principal, not sasl.kerberos.principal (#1502)
Enhancements
- Default producer message offsets to OFFSET_INVALID rather than 0
- new nuget package layout + debian9 librdkafka build (#1513, @mhowlett)
- Allow for calling rd_kafka_queue_io_event_enable() from the C++ world (#1483, @akhi3030)
- rdkafka_performance: allow testing latency with different size messages (#1482, @tbsaunde)
Fixes
- Improved stability on termination (internal queues, ERR__DESTROY event)
- offsets_for_times() now returns ERR__TIMED_OUT if brokers did not respond in time
- Let list_groups() return ERR__PARTIAL with a partial group list (#1508)
- Properly handle infinite (-1) rd_timeout:s throughout the code (#1539)
- Fix offsets_store() return value when at least one partition is valid
- portability: rdendian: add le64toh() alias for older glibc (#1463)
- Add MIPS build and fix CRC32 to work on big endian CPUs (@andoma, closes #1498)
- osx: fix endian checking for software crc32c
- Fix comparison in rd_list_remove_cmp (closes #1493)
- stop calling cnd_timedwait() with a timeout of 0 (#1481, @tbsaunde)
- Fix DNS cache logic broker.address.ttl (#1491, @dacjames)
- Fix broker thread "hang" in CONNECT state (#1397)
- Reset rkb_blocking_max_ms on broker DOWN to avoid busy-loop during CONNECT (#1397)
- Fix memory leak when producev() fails (#1478)
- Raise cmake minimum version to 3.2 (#1460)
- Do not assume LZ4 worst (best?) case 255x compression (#1446 by @tudor)
- Fix ALL_BROKERS_DOWN re-generation (fix by @ciprianpascu, #1101)
- rdkafka_performance: busy-wait for short periods of time