Add save synchronization functionality that allows users to back up and sync their game saves to cloud storage. The feature includes:
- Multiple storage slots for organizing saves
- Automatic hash-based change detection to minimize transfers
- Secure file upload/download with authentication
- Conflict resolution and file deletion handling
- User interface for slot management and manual sync control
- Integration with the existing authentication system

New DTO classes define the data structures for API communication, and StorageService handles HTTP operations including multipart file uploads. The SaveSyncFeature provides UI dialogs and manages the sync lifecycle.
Prevent overwriting the currently open file in the editor by checking if the download path matches the main application's current file path. This avoids potential data loss and conflicts when syncing saves.
The settings file should not be included in the synced save files as it contains local client preferences and configurations that should not be synchronized across devices.
The settings file was missing from the list of files to synchronize, which could cause configuration mismatches between devices. Adding it ensures user preferences are also backed up and restored.
…ions
- Remove mods directory from synchronization to avoid syncing mod files
- Add logging for file deletion and upload operations to improve debugging visibility
- Remove redundant debug log for sync response
Add logic to retrieve the previous sync time from persistent settings and include it in the sync payload. Also update the setting with the current time after a successful sync operation.
If an exception occurs while reading the last sync time from settings, reset the value to zero to prevent future failures.
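The defensive read described above can be sketched as follows. This is a minimal stand-alone illustration: a plain `Map` stands in for the mod's persistent settings store (`Core.settings` in the real code), and the setting key name is an assumption, not the mod's actual constant.

```java
import java.util.HashMap;
import java.util.Map;

public class LastSyncExample {
    // Stand-in for the mod's settings store; key name is hypothetical.
    static final Map<String, Object> settings = new HashMap<>();
    static final String SETTING_LAST_SYNC = "save-sync.last-sync";

    /** Read the last sync time, resetting to zero on any malformed value. */
    static long readLastSync() {
        try {
            Object raw = settings.getOrDefault(SETTING_LAST_SYNC, 0L);
            return Long.parseLong(String.valueOf(raw));
        } catch (NumberFormatException e) {
            // A corrupt entry would otherwise fail on every future read;
            // resetting to zero makes the next sync treat everything as new.
            settings.put(SETTING_LAST_SYNC, 0L);
            return 0L;
        }
    }

    public static void main(String[] args) {
        settings.put(SETTING_LAST_SYNC, "not-a-number"); // simulate corruption
        System.out.println(readLastSync()); // prints 0 and repairs the entry
        System.out.println(settings.get(SETTING_LAST_SYNC)); // prints 0
    }
}
```

Resetting on read (rather than only on write) keeps one bad value from permanently breaking the sync cycle.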
Prevent automatic cloud synchronization for new users to avoid unexpected behavior. Users can manually enable the feature if desired.
Walkthrough
Adds a cloud-backed save synchronization feature: new SaveSyncFeature, StorageService client, multiple DTOs, SHA-256 file hashing util, DELETE HTTP helpers, Jackson JavaTimeModule, tightened image URL validation, and mod version bump to v4.46.0-v8.
Sequence Diagram

```mermaid
sequenceDiagram
    participant User as User
    participant SaveSync as SaveSyncFeature
    participant FS as FileSystem
    participant Storage as StorageService
    participant Auth as AuthService
    participant API as Remote API
    User->>SaveSync: Enable feature / Trigger sync
    SaveSync->>FS: Enumerate files & compute SHA-256
    FS-->>SaveSync: File list with hashes
    SaveSync->>Storage: syncSlot(slotId, SyncSlotDto)
    Storage->>Auth: Ensure valid token
    Auth-->>Storage: Token
    Storage->>API: POST /slots/{id}/sync (client files, lastSync)
    API-->>Storage: SyncSlotResponseDto (downloads, missingHashes, extraHashes)
    SaveSync->>Storage: uploadFile(file) for each missing hash
    Storage->>Auth: Ensure valid token
    Storage->>API: multipart POST /upload (file + sha256)
    API-->>Storage: Upload confirmation
    SaveSync->>API: Download files from provided URLs
    API-->>FS: File bytes
    SaveSync->>FS: Delete extra files
    SaveSync->>SaveSync: Update lastSync in settings
```
Estimated Code Review Effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
🚥 Pre-merge checks: 1 passed, 2 failed (1 warning, 1 inconclusive)
Actionable comments posted: 9
🧹 Nitpick comments (1)
src/mindustrytool/features/savesync/dto/SyncSlotDto.java (1)
8-12: Remove `@Data` to align with DTO package conventions.
`SyncSlotDto` is the only DTO in the save-sync package using Lombok `@Data`, while all other DTOs (7 instances) use plain public fields without annotations. Keeping `@Data` with public fields generates redundant accessors. Either remove `@Data` to match the established package style, or make fields private and keep `@Data` for consistency.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/mindustrytool/features/savesync/dto/SyncSlotDto.java` around lines 8 - 12, Remove the Lombok `@Data` annotation from the SyncSlotDto class to match the established DTO style in the savesync package: open the SyncSlotDto class, delete the `@Data` annotation above the class declaration, and keep the public fields lastSync and clientFiles as-is (Instant lastSync and List<ClientFileDto> clientFiles); alternatively, if you prefer to retain Lombok, make those fields private and keep `@Data`, but prefer removing `@Data` to follow existing package conventions.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/mindustrytool/features/auth/AuthHttp.java`:
- Around line 29-35: The AuthRequest.submit logic currently treats every non-GET
as POST causing AuthHttp.delete(...) to send POST; update the submit method in
AuthRequest to explicitly handle HttpMethod.DELETE by adding a branch similar to
the existing POST branch that calls Http.delete(...) with the same parameters
and headers/exception handling you use for POST, while preserving the existing
GET and POST paths (refer to HttpMethod.GET, HttpMethod.DELETE and the
AuthRequest.submit method).
In `@src/mindustrytool/features/savesync/SaveSyncFeature.java`:
- Around line 104-106: getFile(...) currently returns
Vars.dataDirectory.child(path) without validation, allowing directory traversal;
update getFile (and similar usages referenced around lines 429-436 and 457-459)
to normalize and validate the resolved path: resolve the child Fi, compute its
absolute/canonical path and the canonical path of Vars.dataDirectory, and reject
or throw if the resolved path is not a descendant (i.e. does not startWith) the
base data directory; return the validated Fi only when containment is confirmed
and otherwise handle it as an invalid request.
- Around line 109-119: The exit listener is being registered unconditionally in
init(), so sync-on-exit still fires when the feature is disabled; to fix, create
a field for the ApplicationListener (e.g., exitListener) instead of adding it
directly in init(), register it from onEnable() via
Core.app.addListener(exitListener) and unregister it from onDisable() via
Core.app.removeListener(exitListener), or alternatively add a guard at the start
of the listener's exit() that checks the feature enabled state before calling
performSyncOnExit(slotId); reference the init(), onEnable(), onDisable(),
performSyncOnExit(), SETTING_SLOT_ID, and AuthService.getInstance().isLoggedIn()
symbols when making the change.
- Around line 373-375: The setting SETTING_LAST_SYNC is written immediately
after calling processDownloads(response, dialog, cont) so it persists before the
async download/delete pipeline completes; move the
Core.settings.put(SETTING_LAST_SYNC, System.currentTimeMillis()) call into the
completion/final-success path of the async pipeline (for example inside the
success branch/callback of processDownloads or the continuation
Runnable/consumer invoked after deletes/downloads finish) so it only updates
when processDownloads and any downstream delete operations have completed
successfully; reference processDownloads(response, dialog, cont), the
continuation handler `cont`, and SETTING_LAST_SYNC to locate where to relocate
the write.
- Around line 454-459: The code is treating response.extraHashes as file paths
and calling getFile(path).delete(), which will delete wrong files when hashes !=
paths; instead, map each hash to its actual local path before deleting (e.g.,
use the existing index/lookup method such as a local hash->path map or a helper
like getPathForHash(hash) or SaveSyncIndex.lookup(hash)) and only delete the
resolved file; if no mapping exists, skip deletion and log the unknown hash so
you don't delete by raw hash string. Ensure you update the loop that references
response.extraHashes and replace getFile(path).delete() with a
resolve-then-delete flow using the proper lookup function (or adjust to use
response.extraPaths if the server should send paths).
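The resolve-then-delete flow described above can be sketched as follows. The hash-to-path index and the helper name here are hypothetical stand-ins for whatever lookup the real SaveSyncFeature builds while enumerating files, not the mod's actual API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class ExtraHashCleanup {
    /**
     * Resolve server-reported extra hashes to local paths before deleting.
     * Unknown hashes are logged and skipped so nothing is ever deleted by
     * raw hash string.
     */
    static List<String> resolveDeletions(List<String> extraHashes, Map<String, String> hashToPath) {
        List<String> toDelete = new ArrayList<>();
        for (String hash : extraHashes) {
            String path = hashToPath.get(hash);
            if (path == null) {
                // No local mapping: skip rather than guess a path.
                System.out.println("Unknown hash, skipping: " + hash);
            } else {
                toDelete.add(path);
            }
        }
        return toDelete;
    }
}
```

The caller would then delete only the resolved paths, keeping hash identifiers and filesystem paths strictly separate.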
In `@src/mindustrytool/features/savesync/StorageService.java`:
- Around line 50-55: The current use of
AuthService.getInstance().refreshTokenIfNeeded().thenRun(...) can leave the
CompletableFuture `future` unresolved if the refresh fails; change the call to
whenComplete((v, ex) -> { if (ex != null) future.completeExceptionally(ex); else
{ /* existing AuthHttp.delete/... submit logic */ } }) so refresh failures are
propagated into `future` instead of being dropped; apply the same pattern to the
other occurrence around lines 63-68 (the other delete/submit block) to ensure
both delete methods always complete `future` either normally or exceptionally.
- Line 64: The DELETE request currently concatenates the raw path into the URL
(AuthHttp.delete(BASE_URL + "slots/" + slotId + "/files?path=" + path)), which
breaks for spaces and reserved characters; encode the path parameter before
constructing the URL (e.g., use java.net.URLEncoder.encode(path,
StandardCharsets.UTF_8) or equivalent) and use the encoded value in the
AuthHttp.delete call, keeping BASE_URL and slotId as-is and adding the required
import for the encoder.
- Around line 106-170: The multipart upload in StorageService (the block using
HttpURLConnection httpConn, OutputStream outputStream, PrintWriter writer, and
FileInputStream inputStream) must set connection and read timeouts on httpConn
(e.g., setConnectTimeout and setReadTimeout) and must close all I/O via
try-with-resources to avoid leaks; refactor the upload to open the
HttpURLConnection, wrap the OutputStream, PrintWriter, and FileInputStream in
try-with-resources, read the error stream with try-with-resources
(BufferedReader/InputStreamReader) and ensure httpConn.disconnect() in a finally
block (or use try/finally) so httpConn, outputStream, writer, inputStream and
any errorStream are always closed even on exceptions.
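One of the points above concerns percent-encoding the `path` query parameter before building the DELETE URL. A minimal stdlib sketch, where the base URL and slot id are placeholders:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class EncodedDelete {
    /** Build the DELETE URL with the path query parameter encoded. */
    static String deleteUrl(String baseUrl, String slotId, String path) {
        // URLEncoder targets application/x-www-form-urlencoded, so spaces
        // become '+', which is valid inside a query string.
        String encoded = URLEncoder.encode(path, StandardCharsets.UTF_8);
        return baseUrl + "slots/" + slotId + "/files?path=" + encoded;
    }

    public static void main(String[] args) {
        System.out.println(deleteUrl("https://example.invalid/", "42", "saves/my save #1.msav"));
        // https://example.invalid/slots/42/files?path=saves%2Fmy+save+%231.msav
    }
}
```

Without encoding, a path containing a space, `#`, or `&` would truncate or corrupt the query string.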
In `@src/mindustrytool/ui/NetworkImage.java`:
- Line 44: The URL validator currently allows query strings but draw() still
checks file extensions with endsWith(...) and blindly appends "?format=jpeg",
causing rejects and malformed URLs; update the draw() method in
NetworkImage.java to extract the path or use java.net.URI/java.net.URL to
determine the extension (ignore query/fragments) and to append the format
parameter using '&' when a query already exists (or rebuild the query properly)
so extension checks and the constructed fetch URL are consistent with the
regex-based validator.
---
Nitpick comments:
In `@src/mindustrytool/features/savesync/dto/SyncSlotDto.java`:
- Around line 8-12: Remove the Lombok `@Data` annotation from the SyncSlotDto
class to match the established DTO style in the savesync package: open the
SyncSlotDto class, delete the `@Data` annotation above the class declaration, and
keep the public fields lastSync and clientFiles as-is (Instant lastSync and
List<ClientFileDto> clientFiles); alternatively, if you prefer to retain Lombok,
make those fields private and keep `@Data`, but prefer removing `@Data` to follow
existing package conventions.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 07b09262-a19d-4dea-b810-6935973b2b8a
📒 Files selected for processing (16)
- build.gradle
- mod.hjson
- src/mindustrytool/Main.java
- src/mindustrytool/Utils.java
- src/mindustrytool/features/auth/AuthHttp.java
- src/mindustrytool/features/savesync/SaveSyncFeature.java
- src/mindustrytool/features/savesync/StorageService.java
- src/mindustrytool/features/savesync/dto/CheckHashesDto.java
- src/mindustrytool/features/savesync/dto/CheckHashesResponseDto.java
- src/mindustrytool/features/savesync/dto/ClientFileDto.java
- src/mindustrytool/features/savesync/dto/CreateStorageSlotDto.java
- src/mindustrytool/features/savesync/dto/DownloadDto.java
- src/mindustrytool/features/savesync/dto/StorageSlotDto.java
- src/mindustrytool/features/savesync/dto/SyncSlotDto.java
- src/mindustrytool/features/savesync/dto/SyncSlotResponseDto.java
- src/mindustrytool/ui/NetworkImage.java
📜 Review details
🔇 Additional comments (12)
mod.hjson (1)
56-56: Version bump looks aligned with this release scope. No issues in this change.

build.gradle (1)
46-46: Good dependency addition for Java time serialization support. This aligns with the new `Instant`-based DTO payloads.

src/mindustrytool/features/savesync/dto/CheckHashesResponseDto.java (1)
5-7: DTO shape is clean and appropriate for response payloads.

src/mindustrytool/features/savesync/dto/ClientFileDto.java (1)
5-16: Client file DTO looks good for sync payload composition.

src/mindustrytool/features/savesync/dto/CheckHashesDto.java (1)
5-12: Request DTO is straightforward and fits the API contract.

src/mindustrytool/Utils.java (2)
47-50: ObjectMapper date/time configuration looks correct for the new sync DTO flow. The mapper setup is consistent with the PR's Java time handling changes.
230-251: SHA-256 implementation is solid and stream-based. Good use of `DigestInputStream` to avoid loading full files into memory.

src/mindustrytool/Main.java (1)
60-60: SaveSync feature registration is correctly wired. Import and registration are aligned with the existing feature bootstrap sequence.
Also applies to: 118-118

src/mindustrytool/features/savesync/dto/DownloadDto.java (1)
5-10: DTO shape is clear and fit-for-purpose. This is a clean transport model for server-provided download entries.

src/mindustrytool/features/savesync/dto/CreateStorageSlotDto.java (1)
3-11: Create-slot DTO is minimal and correctly structured. No concerns with this model class.

src/mindustrytool/features/savesync/dto/SyncSlotResponseDto.java (1)
5-8: Sync response DTO fields are clear and aligned with sync stages.

src/mindustrytool/features/savesync/dto/StorageSlotDto.java (1)
5-11: Storage slot DTO is clean and adequate for API transport.
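The stream-based hashing praised in the Utils.java comment can be illustrated with a self-contained sketch built on `DigestInputStream`. This is not the mod's actual `Utils.sha256`, just the same technique: reading through the stream drives the digest, so the file is never loaded into memory at once.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.security.DigestInputStream;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class Sha256Example {
    /** Stream-based SHA-256; hashes without buffering the whole input. */
    static String sha256Hex(InputStream in) {
        try {
            MessageDigest digest = MessageDigest.getInstance("SHA-256");
            try (DigestInputStream dis = new DigestInputStream(in, digest)) {
                byte[] buffer = new byte[8192];
                while (dis.read(buffer) != -1) {
                    // Reading updates the digest; nothing else to do per chunk.
                }
            }
            StringBuilder hex = new StringBuilder();
            for (byte b : digest.digest()) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        } catch (IOException | NoSuchAlgorithmException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(sha256Hex(new ByteArrayInputStream("abc".getBytes("UTF-8"))));
        // ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad
    }
}
```

For files, wrap a `FileInputStream` the same way; the fixed-size buffer keeps memory usage constant regardless of save size.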
```java
public static void delete(String url, ConsT<HttpResponse, Exception> success, Cons<Throwable> failure) {
    delete(url).submit(success, failure);
}

public static AuthRequest delete(String url) {
    return new AuthRequest(url, HttpMethod.DELETE);
}
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
# Get the full AuthHttp.java file to understand structure
wc -l src/mindustrytool/features/auth/AuthHttp.java
```

Repository: MindustryTool/MindustryToolMod
Length of output: 123

🏁 Script executed:

```bash
#!/bin/bash
# Look for AuthRequest class definition and submit method
rg -n 'class AuthRequest|public.*submit|HttpMethod|Http\.post|Http\.get|Http\.delete' src/mindustrytool/features/auth/AuthHttp.java -A 2
```

Repository: MindustryTool/MindustryToolMod
Length of output: 1355

🏁 Script executed:

```bash
#!/bin/bash
# Find where AuthRequest is defined (might be in same file or separate)
find . -name "*.java" -type f | xargs rg -l "class AuthRequest" 2>/dev/null | head -5
```

Repository: MindustryTool/MindustryToolMod
Length of output: 121

🏁 Script executed:

```bash
#!/bin/bash
# Read the submit() method completely to confirm the routing logic
sed -n '64,85p' src/mindustrytool/features/auth/AuthHttp.java
```

Repository: MindustryTool/MindustryToolMod
Length of output: 955
delete() sends POST requests instead of DELETE.
The delete() method creates an AuthRequest with HttpMethod.DELETE (lines 34–35), but the submit() method only routes based on if (method == HttpMethod.GET), falling through to Http.post(...) for all non-GET requests (lines 73–77). Add explicit handling for HttpMethod.DELETE to use Http.delete(...) instead.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/mindustrytool/features/auth/AuthHttp.java` around lines 29 - 35, The
AuthRequest.submit logic currently treats every non-GET as POST causing
AuthHttp.delete(...) to send POST; update the submit method in AuthRequest to
explicitly handle HttpMethod.DELETE by adding a branch similar to the existing
POST branch that calls Http.delete(...) with the same parameters and
headers/exception handling you use for POST, while preserving the existing GET
and POST paths (refer to HttpMethod.GET, HttpMethod.DELETE and the
AuthRequest.submit method).
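A minimal sketch of the explicit per-method routing this comment asks for. The enum and string return values here only stand in for the real `AuthRequest.submit` and its `Http.get`/`Http.post`/`Http.delete` calls; the point is that DELETE gets its own branch instead of falling through to POST.

```java
public class MethodDispatchExample {
    enum HttpMethod { GET, POST, DELETE }

    /** Route each method explicitly; return value stands in for the Http call. */
    static String route(HttpMethod method) {
        switch (method) {
            case GET:
                return "Http.get";
            case DELETE:
                return "Http.delete"; // previously this fell through to POST
            default:
                return "Http.post";
        }
    }

    public static void main(String[] args) {
        System.out.println(route(HttpMethod.DELETE)); // Http.delete
    }
}
```

With an exhaustive switch, adding a future method (e.g. PUT) forces a conscious routing decision rather than a silent POST fallback.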
```java
public Fi getFile(String path) {
    return Vars.dataDirectory.child(path);
}
```
Validate and constrain sync file paths to prevent directory traversal.
Server-provided paths are written/deleted via getFile(...) without containment checks. A crafted path like ../... can escape Vars.dataDirectory.
Suggested fix

```diff
 public Fi getFile(String path) {
-    return Vars.dataDirectory.child(path);
+    try {
+        File base = Vars.dataDirectory.file().getCanonicalFile();
+        File target = new File(base, path).getCanonicalFile();
+        String basePath = base.getPath() + File.separator;
+        if (!target.getPath().startsWith(basePath)) {
+            throw new IllegalArgumentException("Invalid sync path: " + path);
+        }
+        return new Fi(target);
+    } catch (IOException e) {
+        throw new RuntimeException("Failed to resolve sync path: " + path, e);
+    }
 }
```

Also applies to: 429-436, 457-459
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/mindustrytool/features/savesync/SaveSyncFeature.java` around lines 104 -
106, getFile(...) currently returns Vars.dataDirectory.child(path) without
validation, allowing directory traversal; update getFile (and similar usages
referenced around lines 429-436 and 457-459) to normalize and validate the
resolved path: resolve the child Fi, compute its absolute/canonical path and the
canonical path of Vars.dataDirectory, and reject or throw if the resolved path
is not a descendant (i.e. does not startWith) the base data directory; return
the validated Fi only when containment is confirmed and otherwise handle it as
an invalid request.
```java
    processDownloads(response, dialog, cont);
    Core.settings.put(SETTING_LAST_SYNC, System.currentTimeMillis());
}
```
lastSync is persisted too early in the async pipeline.
SETTING_LAST_SYNC is updated before downloads/deletes finish. If those stages fail, next sync may skip pending changes.
Suggested fix

```diff
     processDownloads(response, dialog, cont);
-    Core.settings.put(SETTING_LAST_SYNC, System.currentTimeMillis());
 }
```

```diff
 private void processDeletes(SyncSlotResponseDto response, BaseDialog dialog, Table cont) {
     ...
     cont.button("Close", dialog::hide).size(100, 50).padTop(10);
+    Core.settings.put(SETTING_LAST_SYNC, System.currentTimeMillis());
     // Update initialFilePaths after successful sync
     initialFilePaths.clear();
     initialFilePaths.addAll(listFilePaths());
 }
```
Verify each finding against the current code and only fix it if needed.
In `@src/mindustrytool/features/savesync/SaveSyncFeature.java` around lines 373 -
375, The setting SETTING_LAST_SYNC is written immediately after calling
processDownloads(response, dialog, cont) so it persists before the async
download/delete pipeline completes; move the
Core.settings.put(SETTING_LAST_SYNC, System.currentTimeMillis()) call into the
completion/final-success path of the async pipeline (for example inside the
success branch/callback of processDownloads or the continuation
Runnable/consumer invoked after deletes/downloads finish) so it only updates
when processDownloads and any downstream delete operations have completed
successfully; reference processDownloads(response, dialog, cont), the
continuation handler `cont`, and SETTING_LAST_SYNC to locate where to relocate
the write.
```java
AuthService.getInstance().refreshTokenIfNeeded().thenRun(() -> {
    AuthHttp.delete(BASE_URL + "slots/" + id)
            .header("Authorization", "Bearer " + AuthService.getInstance().getAccessToken())
            .error(future::completeExceptionally)
            .submit(res -> future.complete(null));
});
```
Delete methods can return never-completing futures when token refresh fails.
thenRun(...) is only invoked on successful refresh. If refresh fails, future is never completed exceptionally.
Suggested fix
```diff
 public static CompletableFuture<Void> deleteSlot(String id) {
     CompletableFuture<Void> future = new CompletableFuture<>();
-    AuthService.getInstance().refreshTokenIfNeeded().thenRun(() -> {
-        AuthHttp.delete(BASE_URL + "slots/" + id)
-            .header("Authorization", "Bearer " + AuthService.getInstance().getAccessToken())
-            .error(future::completeExceptionally)
-            .submit(res -> future.complete(null));
-    });
+    AuthHttp.delete(BASE_URL + "slots/" + id)
+        .submit(res -> future.complete(null), future::completeExceptionally);
     return future;
 }

 public static CompletableFuture<Void> deleteFile(String slotId, String path) {
     Log.info("Delete: " + path);
     CompletableFuture<Void> future = new CompletableFuture<>();
-    AuthService.getInstance().refreshTokenIfNeeded().thenRun(() -> {
-        AuthHttp.delete(BASE_URL + "slots/" + slotId + "/files?path=" + path)
-            .header("Authorization", "Bearer " + AuthService.getInstance().getAccessToken())
-            .error(future::completeExceptionally)
-            .submit(res -> future.complete(null));
-    });
+    AuthHttp.delete(BASE_URL + "slots/" + slotId + "/files?path=" + path)
+        .submit(res -> future.complete(null), future::completeExceptionally);
     return future;
 }
```

Also applies to: 63-68
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/mindustrytool/features/savesync/StorageService.java` around lines 50 -
55, The current use of
AuthService.getInstance().refreshTokenIfNeeded().thenRun(...) can leave the
CompletableFuture `future` unresolved if the refresh fails; change the call to
whenComplete((v, ex) -> { if (ex != null) future.completeExceptionally(ex); else
{ /* existing AuthHttp.delete/... submit logic */ } }) so refresh failures are
propagated into `future` instead of being dropped; apply the same pattern to the
other occurrence around lines 63-68 (the other delete/submit block) to ensure
both delete methods always complete `future` either normally or exceptionally.
```java
HttpURLConnection httpConn = null;
try {
    URL url = new URL(BASE_URL + "upload");
    httpConn = (HttpURLConnection) url.openConnection();
    httpConn.setUseCaches(false);
    httpConn.setDoOutput(true);
    httpConn.setDoInput(true);
    httpConn.setRequestMethod("POST");
    httpConn.setRequestProperty("Content-Type", "multipart/form-data; boundary=" + boundary);
    String token = AuthService.getInstance().getAccessToken();

    if (token == null) {
        throw new RuntimeException("No access token available");
    }

    httpConn.setRequestProperty("Authorization", "Bearer " + token);

    OutputStream outputStream = httpConn.getOutputStream();
    PrintWriter writer = new PrintWriter(new OutputStreamWriter(outputStream, StandardCharsets.UTF_8),
            true);

    String hash = Utils.sha256(file.file());
    writer.append("--" + boundary).append(LINE_FEED);
    writer.append("Content-Disposition: form-data; name=\"hash\"").append(LINE_FEED);
    writer.append(LINE_FEED);
    writer.append(hash).append(LINE_FEED);

    String fileName = file.name();
    writer.append("--" + boundary).append(LINE_FEED);
    writer.append("Content-Disposition: form-data; name=\"file\"; filename=\"" + fileName + "\"")
            .append(LINE_FEED);
    writer.append("Content-Type: application/octet-stream").append(LINE_FEED);
    writer.append(LINE_FEED);
    writer.flush();

    FileInputStream inputStream = new FileInputStream(file.file());
    byte[] buffer = new byte[4096];
    int bytesRead;
    while ((bytesRead = inputStream.read(buffer)) != -1) {
        outputStream.write(buffer, 0, bytesRead);
    }
    outputStream.flush();
    inputStream.close();

    writer.append(LINE_FEED);
    writer.flush();
    writer.append("--" + boundary + "--").append(LINE_FEED);
    writer.close();

    int status = httpConn.getResponseCode();
    if (status != HttpURLConnection.HTTP_OK && status != HttpURLConnection.HTTP_CREATED) {
        // Read error stream
        InputStream errorStream = httpConn.getErrorStream();
        if (errorStream != null) {
            BufferedReader reader = new BufferedReader(new InputStreamReader(errorStream));
            StringBuilder response = new StringBuilder();
            String line;
            while ((line = reader.readLine()) != null) {
                response.append(line);
            }
            reader.close();
            throw new IOException("Server returned non-OK status: " + status + " " + response.toString());
        }
        throw new IOException("Server returned non-OK status: " + status);
    }
```
Multipart upload should set timeouts and close streams via try-with-resources.
Current code can hang indefinitely on network stalls and leak I/O resources when exceptions occur mid-upload.
Suggested fix
```diff
 httpConn = (HttpURLConnection) url.openConnection();
 httpConn.setUseCaches(false);
 httpConn.setDoOutput(true);
 httpConn.setDoInput(true);
 httpConn.setRequestMethod("POST");
+httpConn.setConnectTimeout(10000);
+httpConn.setReadTimeout(30000);
 httpConn.setRequestProperty("Content-Type", "multipart/form-data; boundary=" + boundary);
 ...
-OutputStream outputStream = httpConn.getOutputStream();
-PrintWriter writer = new PrintWriter(new OutputStreamWriter(outputStream, StandardCharsets.UTF_8), true);
+try (OutputStream outputStream = httpConn.getOutputStream();
+     PrintWriter writer = new PrintWriter(new OutputStreamWriter(outputStream, StandardCharsets.UTF_8), true);
+     InputStream inputStream = new FileInputStream(file.file())) {
     ...
-    FileInputStream inputStream = new FileInputStream(file.file());
     byte[] buffer = new byte[4096];
     int bytesRead;
     while ((bytesRead = inputStream.read(buffer)) != -1) {
         outputStream.write(buffer, 0, bytesRead);
     }
     outputStream.flush();
-    inputStream.close();
     ...
-    writer.close();
+}
 ...
 InputStream errorStream = httpConn.getErrorStream();
 if (errorStream != null) {
-    BufferedReader reader = new BufferedReader(new InputStreamReader(errorStream));
+    try (BufferedReader reader = new BufferedReader(new InputStreamReader(errorStream, StandardCharsets.UTF_8))) {
         StringBuilder response = new StringBuilder();
         String line;
         while ((line = reader.readLine()) != null) {
             response.append(line);
         }
-        reader.close();
         throw new IOException("Server returned non-OK status: " + status + " " + response.toString());
+    }
 }
```
Verify each finding against the current code and only fix it if needed.
In `@src/mindustrytool/features/savesync/StorageService.java` around lines 106 -
170, The multipart upload in StorageService (the block using HttpURLConnection
httpConn, OutputStream outputStream, PrintWriter writer, and FileInputStream
inputStream) must set connection and read timeouts on httpConn (e.g.,
setConnectTimeout and setReadTimeout) and must close all I/O via
try-with-resources to avoid leaks; refactor the upload to open the
HttpURLConnection, wrap the OutputStream, PrintWriter, and FileInputStream in
try-with-resources, read the error stream with try-with-resources
(BufferedReader/InputStreamReader) and ensure httpConn.disconnect() in a finally
block (or use try/finally) so httpConn, outputStream, writer, inputStream and
any errorStream are always closed even on exceptions.
```diff
 public static boolean isValidImageLink(String url) {
-    return url.matches("https?://.*\\.(png|jpg|jpeg)");
+    return url.matches("^https?://[^?\\s]+\\.(png|jpg|jpeg)(\\?.*)?$");
 }
```
Accepted URLs can still fail to load due to inconsistent query handling.
Line 44 now accepts image URLs with query params, but draw() still rejects them via endsWith(...) checks and can build malformed fetch URLs by always appending ?format=jpeg (instead of &format=jpeg when a query already exists).
Proposed fix
```diff
 public static boolean isValidImageLink(String url) {
-    return url.matches("^https?://[^?\\s]+\\.(png|jpg|jpeg)(\\?.*)?$");
+    return url != null && url.matches("(?i)^https?://[^?\\s]+\\.(png|jpe?g)(\\?.*)?$");
 }
+
+private static boolean hasSupportedImageExtension(String imageUrl) {
+    String path = imageUrl.split("\\?", 2)[0].toLowerCase();
+    return path.endsWith(".png") || path.endsWith(".jpg") || path.endsWith(".jpeg");
+}
 ...
-if (!url.endsWith("png") && !url.endsWith("jpg") && !url.endsWith("jpeg")) {
+if (!hasSupportedImageExtension(url)) {
     return;
 }
 ...
-Http.get(url + "?format=jpeg")
+Http.get(url + (url.contains("?") ? "&format=jpeg" : "?format=jpeg"))
```
+ Http.get(url + (url.contains("?") ? "&format=jpeg" : "?format=jpeg"))🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/mindustrytool/ui/NetworkImage.java` at line 44, The URL validator
currently allows query strings but draw() still checks file extensions with
endsWith(...) and blindly appends "?format=jpeg", causing rejects and malformed
URLs; update the draw() method in NetworkImage.java to extract the path or use
java.net.URI/java.net.URL to determine the extension (ignore query/fragments)
and to append the format parameter using '&' when a query already exists (or
rebuild the query properly) so extension checks and the constructed fetch URL
are consistent with the regex-based validator.
Replace conditional GET/POST branching with unified Http.request() call. Content is now conditionally added only when present, improving code clarity.
The loadLocales import is no longer used in the SaveSyncFeature class, so it has been removed to keep the code clean and avoid unnecessary dependencies.