Add a FailoverChannel wrapper on top of IsolationChannel to maintain a set of primary and failover channels. #37840

parveensania wants to merge 5 commits into apache:master
Conversation
R: @scwhittle
...va/worker/src/main/java/org/apache/beam/runners/dataflow/worker/StreamingDataflowWorker.java
When looking for similar implementations, I came across GcpMultiEndpointChannel: https://github.com/GoogleCloudPlatform/grpc-gcp-java/blob/master/grpc-gcp/src/main/java/com/google/cloud/grpc/GcpMultiEndpointChannel.java. GcpMultiEndpointChannel uses the channel's ConnectivityState to determine which channel to use. Would it be more robust if FailoverChannel used ConnectivityState instead of RPC status to fail over? Thinking something like: wait X amount of time for the primary to become ready the first time, and fail over to the fallback channel if it takes longer than that. We can let the primary retry connections in the background and switch back to it whenever it becomes ready.
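For comparison, a minimal sketch of what connectivity-driven switching could look like, assuming a hypothetical watcher inside FailoverChannel (class, field, and method names here are illustrative, not from this PR):

```java
import io.grpc.ConnectivityState;
import io.grpc.ManagedChannel;

// Hypothetical sketch: re-register on every state transition and flip a flag,
// letting gRPC retry the primary's connection in the background.
final class PrimaryConnectivityWatcher {
  private final ManagedChannel primary;
  private volatile boolean primaryUsable = true;

  PrimaryConnectivityWatcher(ManagedChannel primary) {
    this.primary = primary;
    watch(primary.getState(/* requestConnection= */ false));
  }

  private void watch(ConnectivityState current) {
    primaryUsable =
        current == ConnectivityState.READY || current == ConnectivityState.IDLE;
    // notifyWhenStateChanged fires only once, so re-arm it after each transition.
    primary.notifyWhenStateChanged(
        current, () -> watch(primary.getState(/* requestConnection= */ false)));
  }

  boolean primaryUsable() {
    return primaryUsable;
  }
}
```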
```java
}

private void notifyFailure(Status status, boolean isFallback, String methodName) {
  if (!status.isOk() && !isFallback && fallback != null) {
```
The javadoc on the class says we fall back only on UNAVAILABLE errors, but based on the code here it looks like we'll fall back on any error. Is this expected?

https://grpc.io/docs/guides/error/ says network-level issues may return UNAVAILABLE, UNKNOWN, or DEADLINE_EXCEEDED; should we include those here?
I was previously triggering fallback only on UNAVAILABLE but later changed it to any non-OK status and forgot to update the comment. I have now changed the check to trigger fallback on UNAVAILABLE, UNKNOWN, or DEADLINE_EXCEEDED rather than any non-OK status.

I went for a hybrid approach: check both connection state and RPC status. Connection-state errors could be transient, so we move back to the primary as soon as its state changes to READY. RPC status can capture server-side issues too, like the backend not responding (for instance, requests getting rejected by security policies; there could be other reasons as well). For RPC-triggered fallback I've used a longer cooling period before we retry the primary. WDYT?
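A rough sketch of that hybrid rule under the assumptions above (the names and the one-hour constant are illustrative, not necessarily the PR's exact values):

```java
import io.grpc.ConnectivityState;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

final class HybridFailoverSketch {
  private static final long RPC_COOLING_NANOS = TimeUnit.HOURS.toNanos(1);
  private final AtomicLong lastRpcFallbackNanos = new AtomicLong(0);

  // Connectivity failures are treated as transient: go back to the primary as
  // soon as it reports READY. RPC failures start the long cooling period.
  boolean shouldUsePrimary(ConnectivityState primaryState, long nowNanos) {
    if (primaryState != ConnectivityState.READY
        && primaryState != ConnectivityState.IDLE) {
      return false;
    }
    return nowNanos - lastRpcFallbackNanos.get() > RPC_COOLING_NANOS;
  }

  void onPrimaryRpcFailure(long nowNanos) {
    lastRpcFallbackNanos.set(nowNanos);
  }
}
```

The asymmetry mirrors the comment: connectivity flaps recover as soon as READY is observed, while an RPC-level failure keeps us on the fallback for the full cooling period.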
```java
 * primary channel becomes READY again.
 * <li><b>RPC Failover:</b> If primary channel RPC fails with transient errors ({@link
 *     Status.Code#UNAVAILABLE}, {@link Status.Code#DEADLINE_EXCEEDED}, or {@link
 *     Status.Code#UNKNOWN}), switches to fallback channel and waits for a 1-hour cooling period
```
...unless the channel goes through an unhealthy -> healthy connectivity transition? I want to make sure a race where we observe an RPC failure before we observe the connectivity failure doesn't cause us to stop using the primary channel if it reestablishes quickly.
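One way to guard against that race, sketched against the lastRPCFallbackTimeNanos field from the diff further down (the hook itself is hypothetical):

```java
// Illustrative sketch: a READY transition on the primary clears the RPC-failure
// cooling period, so a quickly-recovered primary is used again immediately.
private void onPrimaryStateChange(ConnectivityState newState) {
  if (newState == ConnectivityState.READY) {
    lastRPCFallbackTimeNanos.set(0); // Primary reestablished; forget the RPC failure.
  }
}
```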
```java
  registerPrimaryStateChangeListener();
}

// Test-only.
```
How about removing this one, then? The test can have a helper in itself that calls forTest below with default creds and a time supplier.
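For instance, a hypothetical test-local helper; the forTest parameter list (channels, default creds, time supplier) is assumed from this comment, not verified against the PR:

```java
// Hypothetical test-local helper replacing the test-only factory on the class.
private static FailoverChannel newChannelForTest(
    ManagedChannel primary, ManagedChannel fallback) {
  // forTest's parameter list is assumed here: channels, default creds, clock.
  return FailoverChannel.forTest(primary, fallback, /* creds= */ null, System::nanoTime);
}
```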
```java
private FailoverChannel(
    ManagedChannel primary,
    @Nullable ManagedChannel fallback,
```
Can we just not support null here? It seems the caller could just use the primary channel directly without creating a FailoverChannel if they don't want fallback support, and then we don't have to complicate the code with the fallback possibly being null.
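The call-site simplification could then be a one-liner like this sketch (FailoverChannel.create's exact signature is assumed):

```java
// Only wrap when a fallback actually exists; otherwise use the primary as-is,
// so FailoverChannel never has to handle a null fallback internally.
static ManagedChannel withOptionalFallback(ManagedChannel primary, ManagedChannel fallback) {
  return fallback == null ? primary : FailoverChannel.create(primary, fallback);
}
```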
```java
}

private boolean shouldFallBackDueToPrimaryState() {
  ConnectivityState connectivityState = primary.getState(true);
```
Passing true sounds like it might trigger a connection attempt if the channel is in the IDLE state. How about passing false and treating IDLE as not something that needs to be fallen back from?

Or could we just remove this check entirely if we are setting up a change listener to observe the channel's state changes anyway?
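A sketch of the suggested variant, assuming only hard failure states should trigger fallback:

```java
// Pass false so this check never kicks off a connection attempt itself, and
// treat IDLE (and CONNECTING) as healthy enough to keep using the primary.
private boolean shouldFallBackDueToPrimaryState() {
  ConnectivityState state = primary.getState(/* requestConnection= */ false);
  return state == ConnectivityState.TRANSIENT_FAILURE
      || state == ConnectivityState.SHUTDOWN;
}
```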
```java
private boolean shouldFallbackBasedOnRPCStatus(Status status) {
  switch (status.getCode()) {
    case UNAVAILABLE:
    case DEADLINE_EXCEEDED:
```
I'm worried that DEADLINE_EXCEEDED might occur for other reasons too. One idea might be to check whether the call received any responses; in that case we know it was at some point connected to the backend and we could choose not to fall back. We could also perhaps wait for several consecutively failed RPCs, or for RPCs failing over some elapsed time period, before falling back.
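The consecutive-failures idea could be as small as this sketch (the threshold and names are illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Gate fallback on several back-to-back failures so a single spurious
// DEADLINE_EXCEEDED doesn't abandon the primary channel.
final class ConsecutiveFailureGate {
  private static final int FAILURES_BEFORE_FALLBACK = 3;
  private final AtomicInteger consecutiveFailures = new AtomicInteger();

  // Returns true once enough consecutive failures have been observed.
  boolean recordFailure() {
    return consecutiveFailures.incrementAndGet() >= FAILURES_BEFORE_FALLBACK;
  }

  void recordSuccess() {
    consecutiveFailures.set(0);
  }
}
```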
```java
private final AtomicLong lastRPCFallbackTimeNanos = new AtomicLong(0);
private final AtomicLong primaryNotReadySinceNanos = new AtomicLong(-1);
private final LongSupplier nanoClock;
private final AtomicBoolean stateChangeListenerRegistered = new AtomicBoolean(false);
```
Can we move all the atomics into a State object that we synchronize? We have long-lived calls, so I don't think we have to worry about the performance of a synchronized block versus atomics in the call-creation path, as long as we are not doing any blocking work within it.

I think it will help keep the code simpler, and we won't have to worry about the weird states that races across independent atomics could put us in.
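A sketch of that consolidation, reusing the field names from the diff above (the class shape itself is illustrative):

```java
import java.util.function.LongSupplier;

// All related fields live behind one lock, so they are always read and
// written consistently; nothing blocking happens inside the lock.
final class FailoverState {
  private long lastRPCFallbackTimeNanos = 0;
  private long primaryNotReadySinceNanos = -1;
  private boolean stateChangeListenerRegistered = false;
  private final LongSupplier nanoClock;

  FailoverState(LongSupplier nanoClock) {
    this.nanoClock = nanoClock;
  }

  synchronized void onRpcFallback() {
    lastRPCFallbackTimeNanos = nanoClock.getAsLong();
  }

  // Returns true only for the first caller, replacing the AtomicBoolean.
  synchronized boolean tryRegisterStateChangeListener() {
    if (stateChangeListenerRegistered) {
      return false;
    }
    stateChangeListenerRegistered = true;
    return true;
  }
}
```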
```java
  return currentTimeNanos - primaryNotReadySinceNanos.get() > PRIMARY_NOT_READY_WAIT_NANOS;
}

private void notifyFailure(Status status, boolean isFallback, String methodName) {
```
nit: notifyCallDone? we call it on success too
```java
super.start(
    new SimpleForwardingClientCallListener<RespT>(responseListener) {
      @Override
      public void onClose(Status status, Metadata trailers) {
```
Here is where I was wondering: could we hook into onMessage or onHeaders to determine that the call made some progress before possibly failing due to DEADLINE_EXCEEDED or UNAVAILABLE (which could possibly come from the backend status)?
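A sketch of that idea using gRPC's SimpleForwardingClientCallListener; the fallback hook in onClose is hypothetical:

```java
import io.grpc.ClientCall;
import io.grpc.ForwardingClientCallListener.SimpleForwardingClientCallListener;
import io.grpc.Metadata;
import io.grpc.Status;

// Track whether the call ever made progress before deciding to fall back.
final class ProgressTrackingListener<RespT>
    extends SimpleForwardingClientCallListener<RespT> {
  private volatile boolean sawProgress = false;

  ProgressTrackingListener(ClientCall.Listener<RespT> delegate) {
    super(delegate);
  }

  @Override
  public void onHeaders(Metadata headers) {
    sawProgress = true; // We reached the backend at least once.
    super.onHeaders(headers);
  }

  @Override
  public void onMessage(RespT message) {
    sawProgress = true;
    super.onMessage(message);
  }

  @Override
  public void onClose(Status status, Metadata trailers) {
    // A DEADLINE_EXCEEDED after real progress likely isn't a connectivity
    // problem, so only count failures that saw no progress at all.
    if (!status.isOk() && !sawProgress) {
      // Record a fallback-worthy failure here (hypothetical hook).
    }
    super.onClose(status, trailers);
  }
}
```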
```java
        currentFlowControlSettings),
    currentFlowControlSettings.getOnReadyThresholdBytes());
ManagedChannel primaryChannel =
    IsolationChannel.create(
```
Since it's being set up this way, IsolationChannel's connectivity callbacks are going to be what is used. I'm not sure how that will work, since it internally has multiple channels. Looking at it, it seems to just have the default ManagedChannel implementation, which throws an unimplemented exception.

What about having IsolationChannel on top of the fallback channels? That seems simpler to me, since IsolationChannel just internally creates the separate channels and otherwise doesn't do much more than forward things on.

It would be good to have a unit test of whatever setup we do use, so that we flush out issues there instead of requiring an integration test.
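The alternative layering might look roughly like this (factory signatures and the channel-creation helpers are assumptions):

```java
// Failover handled on the raw channels; IsolationChannel layered on top so
// the channels it creates internally are the ones that fail over. Helper
// names (newPrimaryChannel/newFallbackChannel) are illustrative.
ManagedChannel channel =
    IsolationChannel.create(
        () -> FailoverChannel.create(newPrimaryChannel(), newFallbackChannel()));
```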
Adds a FailoverChannel wrapper class on top of IsolationChannels to maintain a primary channel and a failover channel, falling back to the failover channel if connectivity over the primary channel cannot be established. The primary channel is retried again after a cooling period.
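Illustrative wiring for the described behavior (the factory and helper names, and the endpoint variables, are assumptions rather than the PR's exact code):

```java
// FailoverChannel wraps two IsolationChannels: RPCs use the primary until
// connectivity fails, then switch to the failover channel; the primary is
// retried again after the cooling period.
ManagedChannel primary = IsolationChannel.create(() -> newWindmillChannel(primaryEndpoint));
ManagedChannel failover = IsolationChannel.create(() -> newWindmillChannel(failoverEndpoint));
ManagedChannel channel = FailoverChannel.create(primary, failover);
```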