
Information on the current system

I currently have Service-A, which is the producer, and Service-B, which is the consumer. Within Service-B:

  1. I receive the data object.
  2. The data object is queued.
  3. The queue is actively polled by a coroutine running in the background (see the sketch after this list).
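For reference, a minimal sketch of that setup. The post does not name a language, so Kotlin is used here purely for illustration (coroutines are mentioned); DataObject, onReceive, and process are hypothetical stand-ins for the real types and handlers.

    import kotlinx.coroutines.*
    import java.util.concurrent.ConcurrentLinkedQueue

    // DataObject, onReceive and process are hypothetical stand-ins for the real types.
    data class DataObject(val payload: String)

    class ConsumerService(private val scope: CoroutineScope) {
        private val queue = ConcurrentLinkedQueue<DataObject>()

        // Steps 1 and 2: receive the data object and enqueue it.
        fun onReceive(obj: DataObject) {
            queue.add(obj)
        }

        // Step 3: a background coroutine actively polls the queue.
        fun startPolling() = scope.launch {
            while (isActive) {
                val next = queue.poll()
                if (next != null) process(next) else delay(10)  // back off briefly when empty
            }
        }

        private suspend fun process(obj: DataObject) {
            // handle the data object
        }
    }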

What is the problem?

As the queue grows, the memory usage of my service (B) grows with it. I would like to figure out a way to keep memory usage bounded.

Possible Solution

  1. Queue size cap/threshold is set to X
  2. Once Queue.size == X, pause consumption (the producer, i.e. Service-A, offers this back-pressure mechanism).
  3. Consumption is paused only after the current batch of data objects has been received, so Queue.size would technically grow to X + delta: from the producer's perspective, even if it receives a pause() request, it only pauses after sending over the current batch of data objects.
  4. Queue polling happens as usual.
  5. Send a resume() request to the producer once Queue.size drops back below a lower threshold (see the sketch after this list).
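A rough sketch of steps 1-5, again in Kotlin purely for illustration. ProducerControl is a hypothetical stand-in for whatever client exposes Service-A's pause()/resume() calls; highWatermark is X, and lowWatermark is the resume point asked about in question 3 below.

    import java.util.concurrent.ConcurrentLinkedQueue
    import java.util.concurrent.atomic.AtomicBoolean
    import java.util.concurrent.atomic.AtomicInteger

    // Hypothetical stand-in for the real Service-A client that exposes pause()/resume().
    interface ProducerControl {
        fun pause()
        fun resume()
    }

    class BoundedConsumer(
        private val producer: ProducerControl,
        private val highWatermark: Int,   // X: pause once the queue reaches this size
        private val lowWatermark: Int     // e.g. X / 2: resume once the queue drains below this
    ) {
        private val queue = ConcurrentLinkedQueue<Any>()
        private val size = AtomicInteger(0)
        private val paused = AtomicBoolean(false)

        fun onReceive(obj: Any) {
            queue.add(obj)
            // Step 2: ask the producer to pause when the cap is hit. The queue may still
            // grow to X + delta while the producer finishes sending its current batch (step 3).
            if (size.incrementAndGet() >= highWatermark && paused.compareAndSet(false, true)) {
                producer.pause()
            }
        }

        // Step 4: called by the polling coroutine as usual.
        fun poll(): Any? {
            val obj = queue.poll() ?: return null
            // Step 5: resume only after draining below the lower watermark, not immediately
            // below X, so pause/resume requests do not flap around a single threshold.
            if (size.decrementAndGet() <= lowWatermark && paused.compareAndSet(true, false)) {
                producer.resume()
            }
            return obj
        }
    }

The gap between the two watermarks acts as a hysteresis band: the wider it is, the fewer pause/resume round trips to Service-A, at the cost of a longer pause once the cap is hit.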

Questions

  1. What is the broad topic of the problem that I'm trying to solve? Is throttling the correct term?
  2. Am I moving in the right direction in limiting Queue size to approach the problem of memory usage?
  3. If yes, how do I decide when to resume consumption? Should I resume when Queue.size goes down below X/2? X/4? (Where X is the predefined threshold)
  4. I have looked into certain throttling algorithms; however, in my case discarding data objects is not an option. Are there any other preexisting algorithms that I should read up on?

