
Conversation

@lkuchars (Contributor) commented on Feb 10, 2026

Summary

NIFI-15563

Tracking

Please complete the following tracking steps prior to pull request creation.

Issue Tracking

Pull Request Tracking

  • Pull Request title starts with Apache NiFi Jira issue number, such as NIFI-00000
  • Pull Request commit message starts with Apache NiFi Jira issue number, such as NIFI-00000
  • Pull request contains commits signed with a registered key indicating Verified status

Pull Request Formatting

  • Pull Request based on current revision of the main branch
  • Pull Request refers to a feature branch with one commit containing changes

Verification

Please indicate the verification steps performed prior to pull request creation.

Build

  • Build completed using ./mvnw clean install -P contrib-check
    • JDK 21
    • JDK 25

Licensing

  • New dependencies are compatible with the Apache License 2.0 according to the License Policy
  • New dependencies are documented in applicable LICENSE and NOTICE files

Documentation

  • Documentation formatting appears as expected in rendered files

@lkuchars force-pushed the lkucharski/NIFI-15563-report-lag-from-consume-kafka branch from c8786f0 to 51e75e1 on February 10, 2026 at 17:52
@exceptionfactory (Contributor) left a comment:

Thanks for putting together this implementation with the current lag gauge recording @lkuchars. The general strategy looks good, and I recommended some implementation adjustments.

    @Override
    public ByteRecord next() {
        ByteRecord record = delegate.next();
        topicPartitionSummaries.add(new TopicPartitionSummary(record.getTopic(), record.getPartition()));
@exceptionfactory (Contributor) commented:

This approach results in creating a new TopicPartitionSummary for every Record, which can create a lot of objects in a short period of time. Instead, keeping track of the last TopicPartitionSummary and comparing the current topic and partition values against it would avoid the unnecessary object creation.
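
A minimal sketch of that suggestion, reusing the names from the quoted diff; the `lastSummary` field and the `return record` statement are assumptions about the surrounding class, not the final implementation:

```java
// Sketch only: allocate a new TopicPartitionSummary only when the topic or
// partition changes between records, rather than once per record.
private TopicPartitionSummary lastSummary;

@Override
public ByteRecord next() {
    final ByteRecord record = delegate.next();
    if (lastSummary == null
            || !lastSummary.getTopic().equals(record.getTopic())
            || lastSummary.getPartition() != record.getPartition()) {
        lastSummary = new TopicPartitionSummary(record.getTopic(), record.getPartition());
        topicPartitionSummaries.add(lastSummary);
    }
    return record;
}
```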

Comment on lines 511 to 524
        Map<TopicPartitionSummary, OptionalLong> topicPartitionLag =
                topicPartitionSummaries.stream()
                        .map(ps -> new TopicPartitionSummary(ps.getTopic(), ps.getPartition()))
                        .collect(Collectors.toMap(
                                Function.identity(),
                                tps -> consumerService.currentLag(tps)
                        ));

        topicPartitionLag.forEach((tps, lag) -> {
            if (lag.isPresent()) {
                final String gaugeName = makeLagMetricName(tps);
                session.recordGauge(gaugeName, lag.getAsLong(), CommitTiming.NOW);
            }
        });
@exceptionfactory (Contributor) commented:

This functional approach results in creating an unnecessary intermediate Map. I recommend rewriting this using traditional for-each loops and avoiding the intermediate Map creation.
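
For illustration, a rewrite along those lines could look like the following; it reuses the names from the quoted diff (`topicPartitionSummaries`, `consumerService.currentLag`, `makeLagMetricName`, `session.recordGauge`) and is a sketch, not the merged code:

```java
// Sketch only: iterate directly over the recorded summaries and record one gauge
// per topic-partition, without building an intermediate Map.
for (final TopicPartitionSummary summary : topicPartitionSummaries) {
    final OptionalLong lag = consumerService.currentLag(summary);
    if (lag.isPresent()) {
        session.recordGauge(makeLagMetricName(summary), lag.getAsLong(), CommitTiming.NOW);
    }
}
```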

    }

    String makeLagMetricName(final TopicPartitionSummary tps) {
        return "consume.kafka." + tps.getTopic() + "." + tps.getPartition() + ".currentLag";
@exceptionfactory (Contributor) commented:

Instead of String concatenation, I recommend using a static format string and adjusting the format without the consume.kafka prefix, since the context of the gauge already includes the Processor Type.
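
One possible shape for that suggestion, with the constant name and format pattern as assumptions:

```java
// Sketch only: static format string without the "consume.kafka" prefix, since the
// gauge context already identifies the Processor Type.
private static final String LAG_METRIC_NAME_FORMAT = "%s.%d.currentLag";

String makeLagMetricName(final TopicPartitionSummary tps) {
    return String.format(LAG_METRIC_NAME_FORMAT, tps.getTopic(), tps.getPartition());
}
```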

@lkuchars (Contributor, Author) commented:

Thanks for the review. Requested changes are applied.

@exceptionfactory (Contributor) left a comment:

Thanks for the updates @lkuchars. Please review the Checkstyle warnings, but the overall implementation looks close to completion.

@lkuchars (Contributor, Author) commented:

> Thanks for the updates @lkuchars. Please review the Checkstyle warnings, but the overall implementation looks close to completion.

I'm sorry about that, @exceptionfactory. It turns out I ran contrib-check in the AWS bundle instead. Thank you for the review.
