Description
Operating System
Custom Alpine Linux v3.14 (kernel 6.1.50)
Browser Version
Node.js v18.18.2
Firebase SDK Version
10.5.2
Firebase SDK Product:
Firestore
Describe your project's tooling
Custom Webpacked TypeScript targeting es2020.
Node v18.18.2
@firebase/firestore@4.3.2
@grpc/grpc-js@1.9.9
firebase@10.5.2
typescript@5.2.2
webpack@5.89.0
Describe the problem
GRPC connections appear to be left open after the Firestore connection is closed, whether via explicit termination or through general network disruption. This results in a very slow memory leak on production systems with an unstable internet connection.
```
2023-10-28 09:35:31 [Heartbeat] external: 20759300, heapTotal: 41402368, RSS: 120614912
2023-10-29 09:35:31 [Heartbeat] external: 20046998, heapTotal: 43499520, RSS: 121221120
2023-10-30 09:35:31 [Heartbeat] external: 19881169, heapTotal: 50839552, RSS: 129482752
2023-10-31 09:35:31 [Heartbeat] external: 20962083, heapTotal: 70762496, RSS: 150278144
2023-11-01 09:35:31 [Heartbeat] external: 20230416, heapTotal: 91996160, RSS: 171188224
2023-11-02 09:35:31 [Heartbeat] external: 26917000, heapTotal: 112181248, RSS: 200593408
```
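For context, the heartbeat figures above come from periodically logging `process.memoryUsage()`, which reports the same fields (external, heapTotal, rss). A rough sketch of an equivalent logger (the interval and formatting here are simplified, not our exact production code):

```ts
// Heartbeat logger sketch: one sample per day, matching the spacing above.
setInterval(() => {
  const { external, heapTotal, rss } = process.memoryUsage();
  console.log(
    `${new Date().toISOString().replace('T', ' ').slice(0, 19)} [Heartbeat] ` +
      `external: ${external}, heapTotal: ${heapTotal}, RSS: ${rss}`
  );
}, 24 * 60 * 60 * 1000);
```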
We used a script with our application to establish a connection to Firestore, wait a couple of seconds, tear it down, and repeat, capturing a heap snapshot before and after.
The "after" snapshot shows a large number of ClientHttp2Session instances for "dns:firestore.googleapis.com",
and subchannelObjArray in GRPC's getOrCreateSubchannel shows an ever-growing collection of subchannels for the same IP and port. (These subchannels aren't being reused because their SSL contexts differ.)
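A sketch of how the before/after snapshots can be captured and then loaded into Chrome DevTools (writeHeapSnapshot is Node's built-in v8 API; the project ID, API key, and collection name are placeholders, and the loop mirrors the repro below rather than our application code):

```ts
import { writeHeapSnapshot } from 'v8';
import * as firestore from 'firebase/firestore';
import { initializeApp } from 'firebase/app';

(async () => {
  const app = initializeApp({ projectId: 'your-project-id', apiKey: 'an-api-key' });

  writeHeapSnapshot('before.heapsnapshot');

  // Connect, wait a couple of seconds, tear down, repeat.
  for (let i = 0; i < 10; ++i) {
    const store = firestore.getFirestore(app);
    await firestore.getDocs(firestore.collection(store, 'test')).catch(console.error);
    await new Promise(resolve => setTimeout(resolve, 2000));
    await firestore.terminate(store);
  }

  (globalThis as any).gc?.(); // run with --expose-gc so only retained objects remain
  writeHeapSnapshot('after.heapsnapshot');
})();
```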
I believe the GRPC stream needs to be explicitly closed. A quick hack would be:
```diff
diff --git a/packages/firestore/src/platform/node/grpc_connection.ts b/packages/firestore/src/platform/node/grpc_connection.ts
index 77879c280..c91f7ee28 100644
--- a/packages/firestore/src/platform/node/grpc_connection.ts
+++ b/packages/firestore/src/platform/node/grpc_connection.ts
@@ -247,6 +247,11 @@ export class GrpcConnection implements Connection {
         closed = true;
         stream.callOnClose(err);
         grpcStream.end();
+
+        if (this.cachedStub) {
+          this.cachedStub.close();
+          this.cachedStub = undefined;
+        }
       }
     };
```
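For reference on why closing the stub helps: in @grpc/grpc-js a client keeps its channel (and the pooled subchannels behind it) alive until close() is called; dropping the reference alone doesn't release them. As far as I can tell, cachedStub in grpc_connection.ts is a generated client built on this same Client class, so a minimal illustration of the behaviour would be:

```ts
import { Client, credentials } from '@grpc/grpc-js';

// Illustration only: a bare grpc-js Client against the Firestore endpoint.
const client = new Client('firestore.googleapis.com:443', credentials.createSsl());

// ... issue calls through the client ...

// Without this, the channel and its subchannels stay in the subchannel pool
// even after the wrapping object is gone.
client.close();
```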
Steps and code to reproduce issue
Sample program which gets a Firestore instance, queries a collection, terminates Firestore, and repeats that three times, printing out the GRPC subchannel pool after each termination.
```ts
import * as firestore from 'firebase/firestore';
import { deleteApp, initializeApp } from 'firebase/app';
import { getSubchannelPool } from '@grpc/grpc-js/build/src/subchannel-pool';

(async () => {
  const fireBaseApp = initializeApp({
    projectId: 'your-project-id',
    apiKey: 'an-api-key',
  });

  for (let times = 0; times < 3; ++times) {
    const store = firestore.getFirestore(fireBaseApp);
    try {
      await firestore.getDocs(firestore.collection(store, 'test'));
    } catch (e) {
      console.error(e);
    }
    await firestore.terminate(store);
    await firestore.clearIndexedDbPersistence(store);
    console.log(getSubchannelPool(true));
  }

  deleteApp(fireBaseApp);
})();
```
```
npx ts-node -- test.ts
```
After the last iteration the Subchannel pool contains three entries:
```
SubchannelPool {
  pool: [Object: null prototype] {
    'dns:firestore.googleapis.com': [ [Object], [Object], [Object] ]
  },
```
After patching the close callback in openStream in grpc_connection.ts:
```
SubchannelPool {
  pool: [Object: null prototype] { 'dns:firestore.googleapis.com': [] },
  cleanupTimer: null
}
```
The pool is now empty, although GRPC purposefully doesn't remove empty keys (there's a TODO for that in GRPC's source).
Our application does offer end users an option to disable cloud connectivity, which explicitly terminates the Firestore connection. However, we believe the leak can also occur with general network disruptions, or in any other case where the GRPC wrapper's close is invoked, since that leaves the underlying GRPC connection lingering.
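One possible mitigation on the application side (not a fix for the underlying leak) would be to keep a single Firestore instance alive and toggle connectivity with disableNetwork/enableNetwork instead of terminating it, so no new GRPC channel is created per toggle. A sketch, assuming those APIs cover the "disable cloud connectivity" option:

```ts
import { initializeApp } from 'firebase/app';
import { getFirestore, disableNetwork, enableNetwork } from 'firebase/firestore';

const app = initializeApp({ projectId: 'your-project-id', apiKey: 'an-api-key' });
const store = getFirestore(app);

// Toggle remote traffic without tearing down (and later recreating) the
// underlying GRPC channel, avoiding the per-cycle subchannel build-up.
export async function setCloudConnectivity(enabled: boolean): Promise<void> {
  if (enabled) {
    await enableNetwork(store);
  } else {
    await disableNetwork(store);
  }
}
```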