class KafkaProducer[K, V] extends WriteStream[KafkaProducerRecord[K, V]]
Vert.x Kafka producer.
The KafkaProducer provides global control over writing a record.
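For orientation, a minimal usage sketch. It assumes the companion factories `KafkaProducer.create` and `KafkaProducerRecord.create` mirror the Java API, and uses placeholder broker and topic names; adjust to your deployment.

```scala
import io.vertx.scala.core.Vertx
import io.vertx.scala.kafka.client.producer.{KafkaProducer, KafkaProducerRecord}

val vertx = Vertx.vertx()

// Illustrative config values only; pick serializers matching K and V.
val config = Map(
  "bootstrap.servers" -> "localhost:9092",
  "key.serializer" -> "org.apache.kafka.common.serialization.StringSerializer",
  "value.serializer" -> "org.apache.kafka.common.serialization.StringSerializer",
  "acks" -> "1"
)

// Factory assumed to mirror the Java companion's create(vertx, config).
val producer = KafkaProducer.create[String, String](vertx, config)
producer.write(KafkaProducerRecord.create("my-topic", "my-key", "my-value"))
```

The sketches after the member entries below reuse this `producer` and these imports.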
Inheritance
- KafkaProducer
- WriteStream
- StreamBase
- AnyRef
- Any
Instance Constructors
- new KafkaProducer(_asJava: AnyRef)(implicit arg0: scala.reflect.api.JavaUniverse.TypeTag[K], arg1: scala.reflect.api.JavaUniverse.TypeTag[V])
Value Members
- final def !=(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- final def ##(): Int
- Definition Classes
- AnyRef → Any
- final def ==(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- final def asInstanceOf[T0]: T0
- Definition Classes
- Any
- def asJava: AnyRef
- Definition Classes
- KafkaProducer → WriteStream → StreamBase
- def clone(): AnyRef
- Attributes
- protected[java.lang]
- Definition Classes
- AnyRef
- Annotations
- @native() @throws( ... )
- def close(timeout: Long, completionHandler: Handler[AsyncResult[Unit]]): Unit
Close the producer.
- timeout
timeout to wait for closing
- completionHandler
handler called on operation completed
- def close(completionHandler: Handler[AsyncResult[Unit]]): Unit
Close the producer.
- completionHandler
handler called on operation completed
- def close(): Unit
Close the producer.
- def closeFuture(timeout: Long): Future[Unit]
Like close but returns a scala.concurrent.Future instead of taking an AsyncResultHandler.
- def closeFuture(): Future[Unit]
Like close but returns a scala.concurrent.Future instead of taking an AsyncResultHandler.
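A sketch of an orderly shutdown via `closeFuture`, reusing the producer from the introductory sketch; the 10000 ms timeout is an arbitrary example value.

```scala
import scala.concurrent.ExecutionContext.Implicits.global
import scala.util.{Failure, Success}

// Wait up to 10 seconds for in-flight records before closing.
producer.closeFuture(10000L).onComplete {
  case Success(_)  => println("producer closed")
  case Failure(ex) => println(s"close failed: ${ex.getMessage}")
}
```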
- def drainHandler(handler: Handler[Unit]): KafkaProducer[K, V]
Set a drain handler on the stream. If the write queue is full, then the handler will be called when the write queue is ready to accept buffers again. See io.vertx.scala.core.streams.Pump for an example of this being used.
The stream implementation defines when the drain handler is called; for example, it could be when the queue size has been reduced to maxSize / 2. See the sketch after this entry.
- handler
the handler
- returns
a reference to this, so the API can be used fluently
- Definition Classes
- KafkaProducer → WriteStream
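A sketch of the back-pressure pattern described above, reusing the producer and imports from the introductory sketch: write until the queue reports full, then resume from the drain handler. `send` and `records` are illustrative names, not part of this API.

```scala
def send(records: Iterator[KafkaProducerRecord[String, String]]): Unit = {
  // Write while the queue has room.
  while (records.hasNext && !producer.writeQueueFull()) {
    producer.write(records.next())
  }
  // Resume once the implementation signals the queue has drained.
  if (records.hasNext) {
    producer.drainHandler(_ => send(records))
  }
}
```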
- def end(kafkaProducerRecord: KafkaProducerRecord[K, V]): Unit
Same as io.vertx.scala.core.streams.WriteStream#end but writes some data to the stream before ending.
- Definition Classes
- KafkaProducer → WriteStream
- def end(): Unit
Ends the stream.
Once the stream has ended, it cannot be used any more.
- Definition Classes
- KafkaProducer → WriteStream
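A short sketch of the one-argument variant, assuming the hypothetical topic and values from the introductory sketch: write a final record and end the stream in one call.

```scala
// end(record) writes the record, then ends the stream; the producer
// stream must not be written to after this.
producer.end(KafkaProducerRecord.create("my-topic", "final-key", "final-value"))
```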
- final def eq(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- def equals(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- def exceptionHandler(handler: Handler[Throwable]): KafkaProducer[K, V]
Set an exception handler on the write stream.
- handler
the exception handler
- returns
a reference to this, so the API can be used fluently
- Definition Classes
- KafkaProducer → WriteStream → StreamBase
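Since writes complete asynchronously, the exception handler is where failures without a per-write handler surface; a minimal sketch, again reusing the introductory producer:

```scala
// Log asynchronous producer failures instead of dropping them silently.
producer.exceptionHandler(err => println(s"producer error: ${err.getMessage}"))
```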
- def finalize(): Unit
- Attributes
- protected[java.lang]
- Definition Classes
- AnyRef
- Annotations
- @throws( classOf[java.lang.Throwable] )
- def flush(completionHandler: Handler[Unit]): KafkaProducer[K, V]
Invoking this method makes all buffered records immediately available to write.
- completionHandler
handler called on operation completed
- returns
current KafkaProducer instance
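A sketch of forcing buffered records out, e.g. before taking a checkpoint:

```scala
// The handler fires once all previously buffered records are written out.
producer.flush(_ => println("all buffered records written"))
```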
- final def getClass(): Class[_]
- Definition Classes
- AnyRef → Any
- Annotations
- @native()
- def hashCode(): Int
- Definition Classes
- AnyRef → Any
- Annotations
- @native()
- final def isInstanceOf[T0]: Boolean
- Definition Classes
- Any
- final def ne(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- final def notify(): Unit
- Definition Classes
- AnyRef
- Annotations
- @native()
- final def notifyAll(): Unit
- Definition Classes
- AnyRef
- Annotations
- @native()
- def partitionsFor(topic: String, handler: Handler[AsyncResult[Buffer[PartitionInfo]]]): KafkaProducer[K, V]
Get the partition metadata for the given topic.
- topic
the topic for which to get partition info
- handler
handler called on operation completed
- returns
current KafkaProducer instance
- def partitionsForFuture(topic: String): Future[Buffer[PartitionInfo]]
Like partitionsFor but returns a scala.concurrent.Future instead of taking an AsyncResultHandler.
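A sketch of inspecting partition metadata via the Future variant; it prints each PartitionInfo whole rather than assuming specific accessor names.

```scala
import scala.concurrent.ExecutionContext.Implicits.global

producer.partitionsForFuture("my-topic").foreach { partitions =>
  partitions.foreach(info => println(s"partition info: $info"))
}
```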
- def setWriteQueueMaxSize(i: Int): KafkaProducer[K, V]
Set the maximum size of the write queue to maxSize. You will still be able to write to the stream even if there are more than maxSize items in the write queue. This is used as an indicator by classes such as Pump to provide flow control. See the sketch after this entry.
The unit of the value is defined by the implementation of the stream, e.g. in bytes for a io.vertx.scala.core.net.NetSocket, the number of io.vertx.scala.core.eventbus.Message for a io.vertx.scala.core.eventbus.MessageProducer, etc.
- maxSize
the max size of the write stream
- returns
a reference to this, so the API can be used fluently
- Definition Classes
- KafkaProducer → WriteStream
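Because the setter returns `this`, it chains with the other fluent configuration methods; a sketch, where 1000 is an arbitrary example limit:

```scala
producer
  .setWriteQueueMaxSize(1000)
  .exceptionHandler(err => println(s"producer error: ${err.getMessage}"))
```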
- final def synchronized[T0](arg0: ⇒ T0): T0
- Definition Classes
- AnyRef
- def toString(): String
- Definition Classes
- AnyRef → Any
- final def wait(): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws( ... )
- final def wait(arg0: Long, arg1: Int): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws( ... )
- final def wait(arg0: Long): Unit
- Definition Classes
- AnyRef
- Annotations
- @native() @throws( ... )
- def write(record: KafkaProducerRecord[K, V], handler: Handler[AsyncResult[RecordMetadata]]): KafkaProducer[K, V]
Asynchronously write a record to a topic.
- record
record to write
- handler
handler called on operation completed
- returns
current KafkaProducer instance
- def write(kafkaProducerRecord: KafkaProducerRecord[K, V]): KafkaProducer[K, V]
Write some data to the stream. The data is put on an internal write queue, and the write actually happens asynchronously. To avoid running out of memory by putting too much on the write queue, check the io.vertx.scala.core.streams.WriteStream#writeQueueFull method before writing. This is done automatically if using a io.vertx.scala.core.streams.Pump.
- kafkaProducerRecord
the data to write
- returns
a reference to this, so the API can be used fluently
- Definition Classes
- KafkaProducer → WriteStream
- def writeFuture(record: KafkaProducerRecord[K, V]): Future[RecordMetadata]
Like write but returns a scala.concurrent.Future instead of taking an AsyncResultHandler.
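A hedged sketch of writing a record and inspecting the acknowledged metadata; the RecordMetadata accessors shown (getTopic, getPartition, getOffset) are assumed to mirror the Java data object.

```scala
import scala.concurrent.ExecutionContext.Implicits.global
import scala.util.{Failure, Success}

val record = KafkaProducerRecord.create("my-topic", "my-key", "my-value")
producer.writeFuture(record).onComplete {
  case Success(meta) =>
    // Accessor names assumed from the Java RecordMetadata data object.
    println(s"wrote to ${meta.getTopic}/${meta.getPartition} @ ${meta.getOffset}")
  case Failure(ex) =>
    println(s"write failed: ${ex.getMessage}")
}
```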
- def writeQueueFull(): Boolean
This will return true if there are more bytes in the write queue than the value set using io.vertx.scala.core.streams.WriteStream#setWriteQueueMaxSize. A final sketch follows this entry.
- returns
true if write queue is full
- Definition Classes
- KafkaProducer → WriteStream
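A final sketch combining the queue check with a single write, reusing the introductory producer:

```scala
// Respect back-pressure: only write while under the configured limit.
if (!producer.writeQueueFull()) {
  producer.write(KafkaProducerRecord.create("my-topic", "my-key", "my-value"))
}
```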