Add support for config provider for any property #1039
Overview
SNOW-XXXXX
Currently, config providers are allowed only for connection-related properties. For any other property, validation fails, because it runs before the config provider resolves values, so validators see the unresolved placeholder instead of the real value.
This PR adds a check on each validation error: if the error is caused by an unresolved config-provider placeholder, the error is removed.
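As a rough illustration of the idea (a standalone sketch, not the actual PR code): Kafka Connect placeholders have the form `${provider:path:key}`, so validation errors reported for values that still look like unresolved placeholders could be dropped. The class name, method names, and regex below are my own hypothetical helpers.

```java
import java.util.List;
import java.util.Map;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

// Hypothetical helper, not the actual PR code: detect unresolved Kafka Connect
// config-provider placeholders of the form ${provider:path:key} and drop
// validation errors reported for such values.
public class ProviderAwareValidation {
    // Loosely mirrors Kafka Connect's ${provider:[path:]key} placeholder syntax.
    private static final Pattern PLACEHOLDER = Pattern.compile("\\$\\{[^}]+:[^}]+\\}");

    public static boolean isProviderPlaceholder(String value) {
        return value != null && PLACEHOLDER.matcher(value).find();
    }

    // Given the raw config values and per-key error messages, keep only the
    // errors whose value is NOT an unresolved placeholder.
    public static Map<String, List<String>> filterErrors(
            Map<String, String> rawConfig, Map<String, List<String>> errors) {
        return errors.entrySet().stream()
                .filter(e -> !isProviderPlaceholder(rawConfig.get(e.getKey())))
                .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
    }
}
```

With this, an error on `topic2table.map=${file:/etc/creds.properties:map}` would be suppressed until the provider resolves the value, while errors on plain values are kept.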
Unfortunately, the way it's done is hacky (it relies on poor encapsulation in the Kafka Connect code).
I can find a better way to do this (for example, rewriting the code so that each validator extends a config-aware one, so that a value is checked only if it's not supposed to be resolved by a config provider).
Please let me know if this improvement can be accepted on your side, and I'll adjust the code as you'd like.
The motivation is that a user should likely be able to provide any config parameter via a config provider, for whatever reason, and there should be no hidden assumptions about which parameters are secret.
P.S. The true motivation in my case is that we are planning to consume from ~100 topics in one setup, and the connector's config is simply too long for AWS MSK to handle (not sure whether the limit comes from MSK or from Kafka Connect itself). The issue is exacerbated by the even bigger topic2table map (compared to the list of topics).
P.P.S. I also looked into why validation happens before config resolution, but couldn't find a meaningful answer. I also considered resolving these values by writing code in the connector (it's certainly possible), but didn't find existing Kafka Connect connectors that do so (I checked the JDBC and S3 sinks).
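For reference, resolving such values manually in connector code would start with parsing the placeholder into its parts. A minimal standalone sketch, assuming the `${provider:path:key}` syntax (the class name and regex are my own, not from any connector):

```java
import java.util.Optional;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch: split a ${provider:path:key} placeholder into its
// parts, so a connector could look the value up itself. The path segment
// is optional, as in ${env:SECRET_KEY}.
public class PlaceholderResolver {
    private static final Pattern PLACEHOLDER =
        Pattern.compile("\\$\\{([^:}]+):(?:([^:}]*):)?([^}]+)\\}");

    // Returns {provider, path, key}, with path empty when absent,
    // or Optional.empty() if the value is not a placeholder.
    public static Optional<String[]> parse(String value) {
        Matcher m = PLACEHOLDER.matcher(value);
        if (value == null || !m.matches()) {
            return Optional.empty();
        }
        String path = m.group(2) == null ? "" : m.group(2);
        return Optional.of(new String[] {m.group(1), path, m.group(3)});
    }
}
```

The parsed provider name and path could then be handed to the matching `ConfigProvider` to fetch the real value before validation runs.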
Pre-review checklist
- This change is protected by a config parameter (e.g. `snowflake.ingestion.method`): No
- Added end-to-end and unit tests: No
- Why it is not param protected: Likely no risks, as the existing code will remain the same, while it may be useful for some portion of users.