Free Instant Download NEW CCDAK Exam Dumps (PDF & VCE):
Available on:
https://www.certleader.com/CCDAK-dumps.html
Exam Code: CCDAK (Practice Exam Latest Test Questions VCE PDF)
Exam Name: Confluent Certified Developer for Apache Kafka Certification Examination
Certification Provider: Confluent
Online Confluent CCDAK free dumps demo Below:
NEW QUESTION 1
In Kafka, every broker... (select three)
- A. contains all the topics and all the partitions
- B. knows all the metadata for all topics and partitions
- C. is a controller
- D. knows the metadata for the topics and partitions it has on its disk
- E. is a bootstrap broker
- F. contains only a subset of the topics and the partitions
Answer: BEF
Explanation:
Kafka topics are divided into partitions and spread across brokers, so each broker contains only a subset of the topics and partitions. Each broker knows the metadata for all topics and partitions, and each broker can act as a bootstrap broker, but only one of them is elected controller.
NEW QUESTION 2
An ecommerce website maintains two topics: a high-volume "purchase" topic with 5 partitions and a low-volume "customer" topic with 3 partitions. You would like to do a stream-table join of these topics. How should you proceed?
- A. Repartition the purchase topic to have 3 partitions
- B. Repartition customer topic to have 5 partitions
- C. Model customer as a GlobalKTable
- D. Do a KStream / KTable join after a repartition step
Answer: C
Explanation:
In a KStream-KTable join, both topics must be co-partitioned (same number of partitions). This restriction does not apply to a join with a GlobalKTable, which is fully replicated to each application instance and is the most efficient option here, since it avoids a repartition step.
NEW QUESTION 3
A kafka topic has a replication factor of 3 and min.insync.replicas setting of 2. How many brokers can go down before a producer with acks=all can't produce?
- A. 2
- B. 1
- C. 3
Answer: B
Explanation:
acks=all with min.insync.replicas=2 means at least 2 replicas must be in sync for a produce request to succeed. With a replication factor of 3, only 1 broker can go down before the producer starts failing with NotEnoughReplicas errors.
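The arithmetic above can be sketched in plain Java, with no Kafka client on the classpath; the method name is illustrative, not a Kafka API:

```java
public class IsrMath {
    // With acks=all, a produce request succeeds only while the in-sync
    // replica set has at least min.insync.replicas members. Starting from
    // a fully replicated partition, each broker failure shrinks the ISR
    // by one, so the number of tolerable failures is the difference.
    static int tolerableBrokerFailures(int replicationFactor, int minInsyncReplicas) {
        return replicationFactor - minInsyncReplicas;
    }

    public static void main(String[] args) {
        // replication.factor=3, min.insync.replicas=2 -> only 1 broker may fail
        System.out.println(tolerableBrokerFailures(3, 2));
    }
}
```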
NEW QUESTION 4
To transform data from a Kafka topic to another one, I should use
- A. Kafka Connect Sink
- B. Kafka Connect Source
- C. Consumer + Producer
- D. Kafka Streams
Answer: D
Explanation:
Kafka Streams is a library for building streaming applications, specifically applications that transform input Kafka topics into output Kafka topics
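As a sketch, the per-record transformation you would hand to the Kafka Streams DSL (for example to mapValues) is just a function from input value to output value. The version below is plain Java without the kafka-streams dependency, with the transformation itself purely illustrative:

```java
import java.util.List;
import java.util.function.UnaryOperator;
import java.util.stream.Collectors;

public class TransformSketch {
    // The kind of stateless, record-at-a-time transformation Kafka Streams
    // applies between an input topic and an output topic.
    static final UnaryOperator<String> TO_UPPER = v -> v.toUpperCase();

    // Stand-in for the topology: apply the transformation to every value
    // read from the input topic before writing to the output topic.
    static List<String> transformAll(List<String> inputTopicValues) {
        return inputTopicValues.stream().map(TO_UPPER).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(transformAll(List.of("order-1", "order-2")));
    }
}
```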
NEW QUESTION 5
A producer application in a developer machine was able to send messages to a Kafka topic. After copying the producer application into another developer's machine, the producer is able to connect to Kafka but unable to produce to the same Kafka topic because of an authorization issue. What is the likely issue?
- A. Broker configuration needs to be changed to allow a different producer
- B. You cannot copy a producer application from one machine to another
- C. The Kafka ACL does not allow another machine IP
- D. The Kafka Broker needs to be rebooted
Answer: C
Explanation:
ACLs take a "Host" parameter, which represents an IP address. It can be * (all IPs) or a specific IP. Here it is a specific IP: moving the producer to a different machine breaks its authorization, so the ACL needs to be updated.
NEW QUESTION 6
What is the protocol used by Kafka clients to securely connect to the Confluent REST Proxy?
- A. Kerberos
- B. SASL
- C. HTTPS (SSL/TLS)
- D. HTTP
Answer: C
Explanation:
The REST Proxy is an HTTP server, so clients secure the connection with HTTPS, i.e. TLS (still commonly referred to as SSL).
NEW QUESTION 7
Which of the following setting increases the chance of batching for a Kafka Producer?
- A. Increase batch.size
- B. Increase message.max.bytes
- C. Increase the number of producer threads
- D. Increase linger.ms
Answer: D
Explanation:
linger.ms forces the producer to wait before sending, increasing the chance that several records accumulate into a single batch.
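A hedged sketch of the relevant producer settings, using java.util.Properties so it runs without the Kafka client on the classpath (with the client, these would be passed to new KafkaProducer&lt;&gt;(props)); the values are illustrative:

```java
import java.util.Properties;

public class BatchingConfig {
    static Properties producerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        // Wait up to 20 ms for more records before sending, so more
        // records can accumulate into a single batch.
        props.put("linger.ms", "20");
        // Upper bound, in bytes, on one batch per partition; raising it
        // alone does not delay sends the way linger.ms does.
        props.put("batch.size", "32768");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(producerProps().getProperty("linger.ms"));
    }
}
```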
NEW QUESTION 8
How will you find out all the partitions where one or more of the replicas for the partition are not in-sync with the leader?
- A. kafka-topics.sh --bootstrap-server localhost:9092 --describe --unavailable-partitions
- B. kafka-topics.sh --zookeeper localhost:2181 --describe --unavailable-partitions
- C. kafka-topics.sh --broker-list localhost:9092 --describe --under-replicated-partitions
- D. kafka-topics.sh --zookeeper localhost:2181 --describe --under-replicated-partitions
Answer: D
NEW QUESTION 9
You are using a JDBC source connector to copy data from a table to a Kafka topic. One connector is created with tasks.max equal to 2, deployed on a cluster of 3 workers. How many tasks are launched?
- A. 3
- B. 2
- C. 1
- D. 6
Answer: C
Explanation:
The JDBC source connector allows at most one task per table. With a single table, only one task is launched, regardless of tasks.max or the number of workers.
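The resulting task count can be sketched as a min() over the table count and tasks.max; this mirrors how the connector caps parallelism, with the method name illustrative rather than a Connect API:

```java
public class ConnectTasks {
    // A source connector may launch up to tasks.max tasks, but the JDBC
    // source connector in table mode cannot split a single table, so the
    // effective task count is also bounded by the number of tables.
    // The number of workers only affects where tasks run, not how many.
    static int launchedTasks(int tasksMax, int tableCount) {
        return Math.min(tasksMax, tableCount);
    }

    public static void main(String[] args) {
        // tasks.max=2, one table -> 1 task
        System.out.println(launchedTasks(2, 1));
    }
}
```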
NEW QUESTION 10
You have a Kafka cluster and all the topics have a replication factor of 3. One intern at your company stopped a broker, and accidentally deleted all the data of that broker on the disk. What will happen if the broker is restarted?
- A. The broker will start, and other topics will also be deleted as the broker data on the disk got deleted
- B. The broker will start, and won't be online until all the data it needs to have is replicated from other leaders
- C. The broker will crash
- D. The broker will start, won't have any data, and if the broker becomes leader there will be data loss
Answer: B
Explanation:
Kafka's replication mechanism makes it resilient to a broker losing its data on disk: the broker can recover by re-replicating the data from the other replicas, and it will not rejoin the in-sync replica set until it has caught up.
NEW QUESTION 11
A Kafka producer application wants to send log messages to a topic that does not include any key. What are the properties that are mandatory to configure for the producer configuration? (select three)
- A. bootstrap.servers
- B. partition
- C. key.serializer
- D. value.serializer
- E. key
- F. value
Answer: ACD
Explanation:
Both the key and value serializers are mandatory, even when messages are sent without a key, and bootstrap.servers is required to connect to the cluster.
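A minimal keyless-producer configuration can be sketched with java.util.Properties; the serializer class names are the standard Kafka ones, but the block runs without the Kafka client on the classpath:

```java
import java.util.Properties;

public class MinimalProducerConfig {
    static Properties minimalProps() {
        Properties props = new Properties();
        // The three mandatory settings: where to connect, and how to
        // serialize keys and values. key.serializer is required even
        // when every record is sent with a null key.
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(minimalProps().size());
    }
}
```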
NEW QUESTION 12
To produce data to a topic, a producer must provide the Kafka client with...
- A. the list of brokers that have the data, the topic name and the partitions list
- B. any broker from the cluster and the topic name and the partitions list
- C. all the brokers from the cluster and the topic name
- D. any broker from the cluster and the topic name
Answer: D
Explanation:
All brokers can respond to a Metadata request, so a client can connect to any broker in the cluster and then figure out on its own which brokers to send data to.
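Once it has the metadata, the client routes each keyed record to a partition itself. A simplified sketch of that routing in plain Java (the real default partitioner uses a murmur2 hash of the serialized key, not String.hashCode()):

```java
public class PartitionRouting {
    // Deterministic key -> partition routing: the same key always lands
    // on the same partition, which is what preserves per-key ordering.
    // Simplification: Kafka's default partitioner hashes the serialized
    // key with murmur2 rather than using String.hashCode().
    static int partitionFor(String key, int numPartitions) {
        return Math.abs(key.hashCode() % numPartitions);
    }

    public static void main(String[] args) {
        System.out.println(partitionFor("user-42", 5));
    }
}
```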
NEW QUESTION 13
To continuously export data from Kafka into a target database, I should use
- A. Kafka Producer
- B. Kafka Streams
- C. Kafka Connect Sink
- D. Kafka Connect Source
Answer: C
Explanation:
Kafka Connect Sink is used to export data from Kafka to external databases and Kafka Connect Source is used to import from external databases into Kafka.
NEW QUESTION 14
There are two consumers C1 and C2 belonging to the same group G, subscribed to topics T1 and T2. Each of the topics has 3 partitions. How will the partitions be assigned to the consumers with the partition assignment strategy set to RoundRobinAssignor?
- A. C1 will be assigned partitions 0 and 2 from T1 and partition 1 from T2. C2 will have partition 1 from T1 and partitions 0 and 2 from T2.
- B. Two consumers cannot read from two topics at the same time
- C. C1 will be assigned partitions 0 and 1 from T1 and T2, C2 will be assigned partition 2 from T1 and T2.
- D. All consumers will read from all partitions
Answer: A
Explanation:
The correct option is the only one where the two consumers share an equal number of partitions amongst the two topics of three partitions each: round robin sorts all partitions across the subscribed topics and deals them out to the consumers one at a time. An interesting article to read is https://medium.com/@anyili0928/what-i-have-learned-from-kafka-partition-assignment-strategy-799fdf15d3ab
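The dealing-out behaviour can be sketched in plain Java. This is a simplification of the real RoundRobinAssignor, which also handles consumers with differing subscriptions, ignored here:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class RoundRobinSketch {
    // Simplified round robin assignment: walk the sorted partitions of
    // all subscribed topics and hand each one to the next consumer in turn.
    static Map<String, List<String>> assign(List<String> consumers,
                                            Map<String, Integer> topicPartitions) {
        Map<String, List<String>> out = new LinkedHashMap<>();
        for (String c : consumers) out.put(c, new ArrayList<>());
        int i = 0;
        for (Map.Entry<String, Integer> t : topicPartitions.entrySet()) {
            for (int p = 0; p < t.getValue(); p++) {
                out.get(consumers.get(i % consumers.size())).add(t.getKey() + "-" + p);
                i++;
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, Integer> topics = new LinkedHashMap<>();
        topics.put("T1", 3);
        topics.put("T2", 3);
        // C1 -> [T1-0, T1-2, T2-1], C2 -> [T1-1, T2-0, T2-2], i.e. option A
        System.out.println(assign(List.of("C1", "C2"), topics));
    }
}
```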
NEW QUESTION 15
We have a store selling shoes. What dataset is a great candidate to be modeled as a KTable in Kafka Streams?
- A. Money made until now
- B. The transaction stream
- C. Items returned
- D. Inventory contents right now
Answer: AD
Explanation:
Current state and running aggregations ("money made until now", "inventory contents right now") are naturally modeled as KTables, whereas event streams such as transactions and returns must be modeled as KStreams to avoid unbounded growth.
NEW QUESTION 16
A consumer wants to read messages from partitions 0 and 1 of a topic topic1. Code snippet is shown below.
consumer.subscribe(Arrays.asList("topic1"));
List<TopicPartition> pc = new ArrayList<>();
pc.add(new TopicPartition("topic1", 0));
pc.add(new TopicPartition("topic1", 1));
consumer.assign(pc);
- A. This works fine: subscribe() will subscribe to the topic and assign() will assign partitions to the consumer.
- B. Throws IllegalStateException
Answer: B
Explanation:
subscribe() and assign() cannot be used by the same consumer: subscribe() leverages the consumer group mechanism for automatic partition assignment, while assign() manually controls which partitions are read. Mixing the two throws an IllegalStateException.