@tus/server: add GCS locker #616
base: main
Conversation
Thanks for putting in the time to contribute!
I'm not an expert on (distributed) locking, but conceptually I think GCS storage as a locker only makes sense if you're already deploying your server within GCS infrastructure (so it's faster) and you have a bucket in the region where the uploads happen. My assumption is if those conditions aren't met, things will be slow? AFAIK GCS has strong consistency within the same region but eventual consistency for multi-region.
Maybe you can elaborate on your use case?
Indeed, I haven't even thought about using this locker with a store other than GCS. In my case, the storage bucket and the locker bucket are the same, and I think the only case where they should be separated is when the storage bucket is not in the standard storage class. Anyway, I'm not sure that e.g. Firestore would greatly outperform GCS when a different store is used. Regarding region latency, the user should be aware of that and choose a suitable region. Of course a Redis-based implementation would be much better, but this may be a considerable alternative until that is implemented. Shall I move this locker to the gcs-store package to suggest the primary application?
This is interesting because such approaches would allow tus server to implement lockers directly on top of cloud storages instead of using external tools like Redis. However, I would like to see some evidence that this approach actually provides exclusive access to uploads. Is there some blog post that looked into the mechanisms at play here? Are all involved operations strongly consistent?
GCS is strongly consistent, but indeed concurrency was not ensured in my previous approach. I have reworked the code based on this article. Note that I had to upgrade @google-cloud/storage because the previous version was missing a type export. Also, this feature should be moved to a separate package or into gcs-store, as I'm importing from @google-cloud/storage.
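For readers less familiar with the technique: the core trick is GCS's conditional write, which turns object creation into an atomic compare-and-set. A minimal illustrative sketch follows; the bucket name, object path, and payload are made up, and this is not the PR's implementation:

```ts
// Sketch only: acquire a lock by creating the lock object only if it does not
// exist yet. `ifGenerationMatch: 0` makes GCS reject the write with HTTP 412
// if the object already exists, so only one writer can win the race.
import {Storage} from '@google-cloud/storage'

const storage = new Storage()
const lockFile = storage.bucket('my-upload-bucket').file('locks/upload-123.lock')

async function tryAcquireLock(ttlMs: number): Promise<boolean> {
  const payload = JSON.stringify({exp: Date.now() + ttlMs})
  try {
    // The precondition turns "create" into an atomic compare-and-set.
    await lockFile.save(payload, {preconditionOpts: {ifGenerationMatch: 0}})
    return true
  } catch (err) {
    // 412 Precondition Failed => someone else already holds the lock.
    if ((err as {code?: number}).code === 412) return false
    throw err
  }
}
```

If two servers race, GCS guarantees only one create succeeds; the loser sees a 412 and can back off or wait for the existing lock to expire.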
Really nice article, thanks for sharing. It does also say this:
But here we are using it for individual uploads, not batches. Or even smaller with resumed uploads (or where a client sets …
For the last 10 days it has been running in production without problems. We have about 5000 uploads per day. In e2e tests it was indeed slightly slower for 140 files compared to XHR, but I could easily compensate for this by increasing the number of parallel uploads. If I measure individual uploads, the time elapsed between lock and unlock is mostly 20-400 ms for the memory locker, and 300-400 ms for the GCS locker.
That's great to hear! I'm in favor of adding this into the package then.
Overall this looks very good! Also happy with the extensive code comments.
Some things needed:
- The build is currently failing
- We need to update the peerDependencies to not allow any version of @google-cloud/storage.
- Docs. We should also talk about when to (not) use this lock and the things to watch out for, such as what values to set for the TTL and watch interval.
- A test similar to this:
tus-node-server/test/e2e.test.ts
Lines 1045 to 1145 in a0f9da1
```ts
describe('File Store with Locking', () => {
  before(() => {
    server = new Server({
      path: STORE_PATH,
      datastore: new FileStore({directory: `./${STORE_PATH}`}),
      locker: new MemoryLocker(),
    })
    listener = server.listen()
    agent = request.agent(listener)
  })

  after((done) => {
    // Remove the files directory
    rimraf(FILES_DIRECTORY, async (err) => {
      if (err) {
        return done(err)
      }

      // Clear the config
      // @ts-expect-error we can consider a generic to pass to
      // datastore to narrow down the store type
      const uploads = (server.datastore.configstore as Configstore).list?.() ?? []
      for (const upload in uploads) {
        // @ts-expect-error we can consider a generic to pass to
        // datastore to narrow down the store type
        await (server.datastore.configstore as Configstore).delete(upload)
      }
      listener.close()
      return done()
    })
  })

  it('will allow another request to acquire the lock by cancelling the previous request', async () => {
    const res = await agent
      .post(STORE_PATH)
      .set('Tus-Resumable', TUS_RESUMABLE)
      .set('Upload-Length', TEST_FILE_SIZE)
      .set('Upload-Metadata', TEST_METADATA)
      .set('Tus-Resumable', TUS_RESUMABLE)
      .expect(201)

    assert.equal('location' in res.headers, true)
    assert.equal(res.headers['tus-resumable'], TUS_RESUMABLE)

    // Save the id for subsequent tests
    const file_id = res.headers.location.split('/').pop()
    const file_size = parseInt(TEST_FILE_SIZE, 10)

    // Slow down writing
    const originalWrite = server.datastore.write.bind(server.datastore)
    sinon.stub(server.datastore, 'write').callsFake((stream, ...args) => {
      const throttleStream = new Throttle({bps: file_size / 4})
      return originalWrite(stream.pipe(throttleStream), ...args)
    })

    const data = Buffer.alloc(parseInt(TEST_FILE_SIZE, 10), 'a')
    const httpAgent = new Agent({
      maxSockets: 2,
      maxFreeSockets: 10,
      timeout: 10000,
      keepAlive: true,
    })

    const createPatchReq = (offset: number) => {
      return agent
        .patch(`${STORE_PATH}/${file_id}`)
        .agent(httpAgent)
        .set('Tus-Resumable', TUS_RESUMABLE)
        .set('Upload-Offset', offset.toString())
        .set('Content-Type', 'application/offset+octet-stream')
        .send(data.subarray(offset))
    }

    const req1 = createPatchReq(0).then((e) => e)
    await wait(100)

    const req2 = agent
      .head(`${STORE_PATH}/${file_id}`)
      .agent(httpAgent)
      .set('Tus-Resumable', TUS_RESUMABLE)
      .expect(200)
      .then((e) => e)

    const [res1, res2] = await Promise.allSettled([req1, req2])
    assert.equal(res1.status, 'fulfilled')
    assert.equal(res2.status, 'fulfilled')
    assert.equal(res1.value.statusCode, 400)
    assert.equal(res1.value.headers['upload-offset'] !== TEST_FILE_SIZE, true)

    assert.equal(res2.value.statusCode, 200)

    // Verify that we are able to resume even if the first request
    // was cancelled by the second request trying to acquire the lock
    const offset = parseInt(res2.value.headers['upload-offset'], 10)
    const finishedUpload = await createPatchReq(offset)

    assert.equal(finishedUpload.statusCode, 204)
    assert.equal(finishedUpload.headers['upload-offset'], TEST_FILE_SIZE)
  }).timeout(20000)
})
```
If you need help with any of these let me know.
Thank you for the article, I will have a look at it! I am wondering if S3 has similar capabilities and whether a locker could nowadays be implemented on top of it as well.
@netdown still interested in getting this over the finish line?
Yes, but I've been busy the last few weeks and I expect the same at least until July. Feel free to complete the PR if you have the time.
Apologies for my delayed review! I just read the accompanying blog post and wanted to leave some comments about it first. Some additional background information can be found at gcslock, which was a previous GCS-based lock. The only valuable comment online I was able to find about the proposed algorithm is on Lobsters by Aphyr, who is quite experienced in testing distributed systems and databases. However, his comment was more about a general issue with distributed locks and not about this GCS-based approach in particular. The same critique can also be applied to Redis-based locks and there is not much we can do on our end as far as I know.
The proposed algorithm on its own seems sound to me (although I am no expert). It relies on the storage offering strong consistency, which is the case with GCS. While there are many S3-compatible storages, I am not aware of any GCS-compatible storages, so we don't have to worry much about storages with a GCS-like interface that are not strongly consistent.
In addition, the proposed algorithm also provides "instant recovery from stale locks" if the lock was left stale by the same actor that now tries to acquire it. This functionality attaches an identity to each lock, which is dangerous for tus-node-server as we do not want two requests that are processed by the same tus-node-server instance to interfere with the same lock. This PR does not implement this feature, but this difference from the blog post should still be noted in the code somewhere.
The author also acknowledges that this algorithm does not offer low-latency:
A locking operation's average speed is in the order of hundreds of milliseconds.
This is probably fine for large file uploads, which are I/O-bound, but it is still worth documenting somewhere.
Finally, while reading the article, I hoped that a similar approach might be possible for S3, but this does not seem possible at first glance as it does not offer conditional writes like GCS does.
```ts
// On the first attempt, retry after current I/O operations are done, else use an exponential backoff
const waitFn = (then: () => void) =>
  attempt > 0
    ? setTimeout(then, (attempt * this.locker.watchInterval) / 3)
```
Would be nice if it also added random jitter.
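For reference, a small sketch of what adding jitter could look like; the names mirror the snippet above (`attempt`, `watchInterval`), and the exact placement in the locker is left to the author:

```ts
// Sketch only: same backoff shape as above, plus random jitter so competing
// instances don't retry in lockstep. `watchInterval` is assumed to be the
// locker's configured watch interval in milliseconds.
const backoffWithJitter = (attempt: number, watchInterval: number): number => {
  const base = (attempt * watchInterval) / 3
  const jitter = Math.random() * (watchInterval / 3) // spread retries apart
  return base + jitter
}

// e.g. setTimeout(then, backoffWithJitter(attempt, this.locker.watchInterval))
```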
* main: (59 commits)
  Replace demo folder with StackBlitz (tus#704)
  @tus/gcs-store: correctly pass content type (tus#702)
  @tus/s3-store: fix zero byte files (tus#700)
  Update package-lock.json
  [ci] release (tus#696)
  fix: handling consistent cancellation across stream and locks (tus#699)
  @tus/s3-store: Change private modifier into protected (tus#698)
  Create funding-manifest-urls
  Bump @aws-sdk/client-s3 from 3.703.0 to 3.717.0 (tus#695)
  Bump mocha from 10.4.0 to 11.0.1 (tus#693)
  Bump @biomejs/biome from 1.9.2 to 1.9.4 (tus#694)
  [ci] release (tus#690)
  Bump @aws-sdk/client-s3 from 3.701.0 to 3.703.0 (tus#685)
  @tus/s3-store: fix part number increment (tus#689)
  Revert "Bump rimraf from 3.0.2 to 6.0.1 (tus#681)"
  Bump @aws-sdk/client-s3 from 3.682.0 to 3.701.0 (tus#683)
  Bump @changesets/cli from 2.27.9 to 2.27.10 (tus#682)
  Bump rimraf from 3.0.2 to 6.0.1 (tus#681)
  Bump @types/node from 20.11.5 to 22.10.1 (tus#679)
  Ignore JSON for Biome formatting
  ...
Update:
- this.currentMetaGeneration = 0
+ this.currentMetaGeneration = (await this.getMeta()).metageneration
Note the code in tus-node-server/packages/gcs-store/src/locker/GCSLocker.ts (lines 125 to 134 in b5e0bfb), but also inside tus-node-server/packages/gcs-store/src/locker/GCSLock.ts (lines 40 to 52 in b5e0bfb).
From my understanding this wasn't needed and just caused repetitive calls.
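As a side note, here is a minimal sketch of the precondition idea behind the change above (variable names are illustrative, not the PR's exact code): read the object's current metageneration and only delete the lock if its metadata has not changed since that read.

```ts
// Illustrative only: metageneration increments whenever the object's metadata
// changes, so using it as a precondition turns delete into a compare-and-delete.
const [meta] = await lockFile.getMetadata()
const metageneration = Number(meta.metageneration)

// Fails with 412 Precondition Failed if the metadata changed in the meantime.
await lockFile.delete({ifMetagenerationMatch: metageneration})
```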
Only a few minor changes, otherwise LGTM 👍
```ts
 * Check if the lock is healthy, delete if not.
 * Returns TRUE if the lock is healthy.
 */
protected async insureHealth() {
```
```diff
-protected async insureHealth() {
+protected async ensureHealth() {
```
Could this be a naming mistake? "ensure" seems more appropriate than "insure".
```ts
public async create(exp: number) {
  const metadata = {
    metadata: {exp},
    // TODO: this does nothing?
```
What about this TODO comment?
```ts
if (!isHealthy) {
  log('lock not healthy. calling GCSLock.take() again')
  return await this.take(cancelHandler)
```
What happens when the lock is unhealthy and cannot be taken again? Is access to the upload resources on GCP then taken away? Since the locker cannot ensure exclusive access, saving uploaded data to GCS should be stopped.
```ts
  await this.deleteReleaseRequest()
  await this.lockFile.delete({ifGenerationMatch: this.currentMetaGeneration})
} catch (err) {
  // Probably already deleted, no need to report
```
Only errors about the object not existing should be ignored. All other errors should be thrown.
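For illustration, a minimal sketch of what that narrowing could look like for the catch block above (this assumes the client surfaces the HTTP status on `err.code`, which is how @google-cloud/storage typically reports API errors; it is not the PR's code):

```ts
// Sketch only: swallow just "object not found" (404), rethrow everything else.
try {
  await this.deleteReleaseRequest()
  await this.lockFile.delete({ifGenerationMatch: this.currentMetaGeneration})
} catch (err) {
  // 404 means the lock object is already gone, which is the state we wanted anyway.
  if ((err as {code?: number}).code !== 404) {
    throw err
  }
}
```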
```ts
protected async deleteReleaseRequest() {
  try {
    await this.releaseFile.delete()
  } catch (err) {}
```
Only errors about the object not existing should be ignored. All other errors should be thrown.
Walkthrough

This pull request updates dependency versions and enhances the module’s locking mechanism. The changes include version bumps for …

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant GCSLocker
    participant GCSLockHandler
    participant GCSLockFile

    Client->>GCSLocker: Request lock acquisition
    GCSLocker->>GCSLockHandler: Initiate lock process
    GCSLockHandler->>GCSLockFile: Create lock file with expiration
    GCSLockFile-->>GCSLockHandler: Return lock status (created/error)
    GCSLockHandler-->>GCSLocker: Lock acquired or trigger retry logic
    GCSLocker-->>Client: Return lock result
```
Actionable comments posted: 0
🧹 Nitpick comments (4)
packages/gcs-store/src/locker/GCSLock.ts (1)

33-59: Limit recursion or adopt an iterative approach to avoid potential stack issues.
If insureHealth() repeatedly returns false, the current code calls take(cancelHandler) again in a recursive manner, which might risk unbounded recursion. Consider using a loop or adding a retry counter.

```diff
 public async take(cancelHandler: RequestRelease): Promise<boolean> {
   try {
     ...
   } catch (err) {
     ...
     if (!isHealthy) {
-      return await this.take(cancelHandler)
+      // Instead of direct recursion, consider a loop or a
+      // bounded retry mechanism for more robust error handling.
+      return await this.retryTake(cancelHandler)
     }
     ...
   }
 }
```
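If a bounded loop is preferred over recursion, a rough sketch of the idea (the helper and parameter names here are hypothetical, not from the PR):

```ts
// Hypothetical bounded-retry helper: attempt acquisition in a loop with a cap
// instead of recursing, so a permanently unhealthy lock cannot recurse forever.
// `tryTakeOnce` stands in for a single acquisition attempt and is assumed here.
async function takeWithRetry(
  tryTakeOnce: () => Promise<boolean>,
  maxAttempts = 5,
  baseDelayMs = 100
): Promise<boolean> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    if (await tryTakeOnce()) {
      return true
    }
    // Back off a little longer after each failed attempt.
    await new Promise((resolve) => setTimeout(resolve, baseDelayMs * (attempt + 1)))
  }
  return false
}
```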
packages/utils/src/models/Upload.ts (1)

36-38: LGTM! Consider documenting the behavior.
The addition of the null check makes the code more robust. Consider adding a JSDoc comment to document that sizeIsDeferred returns true for both undefined and null values.

```diff
+/**
+ * Returns true if the size is not yet determined (undefined or null).
+ */
 get sizeIsDeferred(): boolean {
   return this.size === undefined || this.size === null
 }
```
packages/gcs-store/test/locker.ts (1)

43-69: Consider adding an error case test.
While the test covers the happy path, consider adding a test case for when the unlock operation fails.
test/src/e2e.test.ts (1)
872-873: Consider safer alternatives to non-null assertions.
While the non-null assertions work, consider using optional chaining or defensive checks for better type safety.

```diff
-files_created.push(file_id!.split('&upload_id')[0])
+files_created.push(file_id?.split('&upload_id')[0] ?? '')

-files_created.push(deferred_file_id!.split('&upload_id')[0])
+files_created.push(deferred_file_id?.split('&upload_id')[0] ?? '')

-bucket.file(file_id!)
+if (!file_id) throw new Error('file_id is required')
+bucket.file(file_id)
```

Also applies to: 891-892, 994-995
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
- package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (11)
- packages/gcs-store/package.json (2 hunks)
- packages/gcs-store/src/index.ts (2 hunks)
- packages/gcs-store/src/locker/GCSLock.ts (1 hunks)
- packages/gcs-store/src/locker/GCSLockFile.ts (1 hunks)
- packages/gcs-store/src/locker/GCSLocker.ts (1 hunks)
- packages/gcs-store/test/locker.ts (1 hunks)
- packages/server/src/handlers/BaseHandler.ts (1 hunks)
- packages/utils/src/models/Upload.ts (1 hunks)
- test/package.json (1 hunks)
- test/src/e2e.test.ts (6 hunks)
- tsconfig.base.json (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- packages/server/src/handlers/BaseHandler.ts
🔇 Additional comments (16)
packages/gcs-store/src/locker/GCSLock.ts (3)
76-76: Rename “insureHealth” to “ensureHealth” for clarity.
This matches [Acconut]'s earlier suggestion regarding naming.

98-117: Watcher logic looks sound.
The approach is appropriate for maintaining and refreshing the lock.

119-125: Expiration check logic is clear.
Returning true if exp is undefined or outdated is a valid way to mark the lock as expired.

packages/gcs-store/src/locker/GCSLockFile.ts (3)

41-41: Clarify or remove the TODO comment.
Please confirm whether setting cacheControl: 'no-store' has any practical effect on the lock file.

94-94: Avoid suppressing all errors here, only ignore “not found” errors.
Currently, all errors are being swallowed, which can hide important issues such as permission errors.

133-133: Throw other errors while ignoring “not found” when deleting release requests.
This mirrors the same concern about silently discarding all exceptions.

packages/gcs-store/src/locker/GCSLocker.ts (1)

1-168: Distributed lock implementation appears robust and well-structured.
The combination of Promise.race for timeouts and incremental backoff for locking provides a solid concurrency mechanism.

packages/gcs-store/test/locker.ts (2)
12-41: LGTM! Good test coverage for lock notification.
The test effectively verifies that when a second lock is acquired, the first lock's release callback is triggered while the second lock's callback remains uncalled.

71-107: LGTM! Good timeout handling test.
The test effectively verifies that the lock acquisition is aborted when the abort signal is triggered, and the appropriate error is returned.

packages/gcs-store/src/index.ts (1)

12-12: LGTM! Clean export addition.
The export statement is correctly placed and makes the GCSLocker functionality available to consumers.
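For illustration, a hypothetical example of how a consumer might wire the exported GCSLocker into a tus server once it ships; the GCSLocker constructor options shown are assumptions, not confirmed by this PR:

```ts
// Hypothetical wiring: reuse the same GCS bucket for storage and locking.
// The GCSLocker options are assumed here; the GCSStore options follow the
// existing @tus/gcs-store API.
import {Server} from '@tus/server'
import {GCSStore, GCSLocker} from '@tus/gcs-store'
import {Storage} from '@google-cloud/storage'

const bucket = new Storage().bucket('my-upload-bucket')

const server = new Server({
  path: '/files',
  datastore: new GCSStore({bucket}),
  locker: new GCSLocker({bucket}),
})

server.listen(8080)
```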
test/src/e2e.test.ts (1)
49-53: LGTM! Improved type safety.
The addition of | undefined to variable types makes them more accurate as they may be undefined before initialization.

tsconfig.base.json (1)

12-13: Verify if skipLibCheck is necessary.
While skipLibCheck: true can help resolve type conflicts with dependencies, it also disables type checking of all declaration files (*.d.ts). This might mask potential type issues. Consider if this change is specifically needed for the @google-cloud/storage upgrade or if there's a more targeted solution.

Could you share more context on why this compiler option was added? Are there specific type conflicts with the updated @google-cloud/storage package that necessitate this change?
test/package.json (1)
13-13: LGTM! Version aligns with peer dependencies.
The added dependency version (^7.12.0) is compatible with the peer dependency requirements in the gcs-store package.

packages/gcs-store/package.json (3)

30-30: LGTM! Improved peer dependency specification.
Good practice to specify a more precise version range (^7.12.0) instead of the previous wildcard (*).

21-21: Verify version alignment between dev and peer dependencies.
The package specifies different minimum versions:

- devDependencies: ^7.15.0
- peerDependencies: ^7.12.0

While this is technically valid (since ^7.15.0 satisfies ^7.12.0), consider aligning these versions to avoid potential confusion.

Is there a specific reason for using different minimum versions? Consider using ^7.15.0 for both to ensure consistent behavior during development and in production.

Also applies to: 30-30

21-21: Verify breaking changes in @google-cloud/storage v7.
The update from v6 to v7 is a major version bump that may include breaking changes.
✅ Verification successful
🌐 Web query:
What are the breaking changes in @google-cloud/storage v7 compared to v6?
💡 Result:
The transition from @google-cloud/storage v6 to v7 introduced several breaking changes that require adjustments in implementation:

Key Breaking Changes

Require Syntax Update
The import style shifted to ES module compatibility:

```js
// Old (v6)
const storage = require('@google-cloud/storage')();

// New (v7)
const {Storage} = require('@google-cloud/storage');
const storage = new Storage({ /* config */ });
```

Removal of URL Support in bucket.upload
Direct remote URL uploads via bucket.upload() are no longer supported[3][5]. Instead, use an HTTP client to fetch data and pipe it into a write stream:

```js
const file = bucket.file('image.png');
const writeStream = file.createWriteStream();
request('https://example.com/image.png').pipe(writeStream);
```

Dropped Node.js Versions
Support for Node.js 4.x and 9.x was removed[3]. Upgrade to Node.js 10+.

Additional Changes

- Firestore Dependency Update
  The underlying @google-cloud/firestore package was upgraded to v7[5], which may affect projects using both libraries.

For detailed migration guidance, refer to the official changelog[3].
Citations:
- 1: https://cloud.google.com/storage/docs/release-notes
- 2: https://www.elastic.co/guide/en/elasticsearch/reference/7.17/breaking-changes-7.0.html
- 3: https://github.com/googleapis/nodejs-storage/blob/main/CHANGELOG.md
- 4: https://developers.google.com/display-video/api/structured-data-file/v7-migration-guide
- 5: https://firebase.google.com/support/release-notes/admin/node
- 6: https://techdocs.broadcom.com/us/en/vmware-tanzu/platform/tanzu-platform-for-cloud-foundry/6-0/tpcf/breaking-changes.html
- 7: https://www.googlecloudcommunity.com/gc/Infrastructure-Compute-Storage/GCP-Storage-bucket-with-Lifecycle-policy-change-cost-estimate/m-p/687521
- 8: https://firebase.google.com/support/releases
- 9: https://cloud.google.com/release-notes
- 10: https://help.cloudstoragesec.com/release-notes/older
@google-cloud/storage v7 Breaking Changes Confirmed
- The major update from v6 to v7 requires a switch in import and instantiation: instead of using a default function call (i.e. const storage = require('@google-cloud/storage')()), v7 requires using the named export with the new Storage() constructor.
- The update removes direct URL support for bucket.upload(), meaning uploads from remote URLs now require you to pipe data into a write stream.
- Compatibility changes include dropping support for older Node.js versions (Node.js 4.x and 9.x), so ensure your runtime is Node.js 10+.
The tightened peer dependency (^7.12.0) versus the devDependency (^7.15.0) does not constitute a breaking issue regarding these changes. However, consider aligning versions for consistency.
This PR is not complete yet; it is still missing unit tests (though the code is tested), README updates, and a changeset. Despite that, I would like to ask you to review my approach first so I don't write needless tests. I have documented the process in detail, but feel free to ask questions.
Summary by CodeRabbit
New Features
Chores
Tests