
Conversation

@marcelklehr (Member):

Checklist

@marcelklehr force-pushed the fix/taskprocessing-cache branch from 3949475 to 49a5212 on February 4, 2025 11:54
@julien-nc (Member) left a comment:

Optional change.

Co-authored-by: Julien Veyssier <[email protected]>
Signed-off-by: Marcel Klehr <[email protected]>
@marcelklehr marked this pull request as ready for review February 4, 2025 12:05
@marcelklehr enabled auto-merge February 4, 2025 12:30
@marcelklehr merged commit 1da3c25 into master on February 4, 2025
191 of 193 checks passed
@marcelklehr deleted the fix/taskprocessing-cache branch February 4, 2025 12:35
@marcelklehr (Member, Author):

/backport to stable31

@marcelklehr (Member, Author):

/backport to stable30


$this->availableTaskTypes = $availableTaskTypes;
$this->cache->set('available_task_types', $this->availableTaskTypes, 60);
$this->distributedCache->set('available_task_types_v2', serialize($this->availableTaskTypes), 60);
A member commented on this snippet:

Hey @marcelklehr, I'm curious why a distributed cache is preferred here? Doesn't this increase the latency of the request? Is there a need to synchronize the cached value across all application servers?

@marcelklehr (Member, Author) replied:

Hey,

> Is there a need to synchronize the cached value across all application servers?

I think so, yes. They should all have the same value to avoid failing tasks that were scheduled based on one server's state and then picked up by a different server. The chance of that happening is slim, though, as the cached value shouldn't change very often.

The member replied:

Would it be possible to adjust the code to handle the case more gracefully? A local cache has a very low latency and scales horizontally. The distributed cache has high latency and creates a bottleneck when scaling out.

@marcelklehr (Member, Author) replied:

Sure, what do you have in mind?

The member replied:

> they should all have the same value to avoid failing tasks sent from the state of one server and received on a different server.

Handling this case better, and using the local cache again.
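The suggestion above could look roughly like the following. This is a hypothetical sketch under stated assumptions, not the actual Nextcloud implementation: the `LocalCache` stand-in, the `TaskTypeRegistry` class, `canHandle`, and the `$fetchTaskTypes` callback are all invented for illustration. The idea is simply to serve from the fast local cache and, on a miss for an unknown task type, invalidate and refetch once before rejecting the task.

```php
<?php
// Hypothetical sketch of the approach discussed above (not Nextcloud code):
// serve task types from the fast local cache, and on a lookup miss
// invalidate and refetch once, so a stale local entry does not fail a
// task that was scheduled by a server with a newer view.

// Minimal in-memory stand-in for a local cache (the TTL is ignored here).
class LocalCache {
    private array $store = [];

    public function get(string $key): mixed {
        return $this->store[$key] ?? null;
    }

    public function set(string $key, mixed $value, int $ttl = 0): void {
        $this->store[$key] = $value;
    }

    public function remove(string $key): void {
        unset($this->store[$key]);
    }
}

// Hypothetical registry; $fetchTaskTypes stands in for the expensive
// provider query that builds the available-task-types map.
class TaskTypeRegistry {
    /** @var callable(): array */
    private $fetchTaskTypes;

    public function __construct(
        private LocalCache $cache,
        callable $fetchTaskTypes,
    ) {
        $this->fetchTaskTypes = $fetchTaskTypes;
    }

    private function getAvailableTaskTypes(): array {
        $cached = $this->cache->get('available_task_types');
        if ($cached !== null) {
            return $cached;
        }
        $types = ($this->fetchTaskTypes)();
        $this->cache->set('available_task_types', $types, 60);
        return $types;
    }

    public function canHandle(string $taskTypeId): bool {
        if (isset($this->getAvailableTaskTypes()[$taskTypeId])) {
            return true;
        }
        // Graceful path: our local view may be stale relative to the
        // server that scheduled the task, so refresh once before failing.
        $this->cache->remove('available_task_types');
        return isset($this->getAvailableTaskTypes()[$taskTypeId]);
    }
}

$calls = 0;
$registry = new TaskTypeRegistry(new LocalCache(), function () use (&$calls): array {
    $calls++;
    // Simulate a provider that registered after the first fetch.
    return $calls === 1
        ? ['core:text2speech' => true]
        : ['core:text2speech' => true, 'core:speech2text' => true];
});

var_dump($registry->canHandle('core:speech2text')); // bool(true): stale miss, refresh, hit
```

This trades one extra fetch on a genuine miss for keeping the hot path entirely on the local cache, which is the latency concern raised in the thread.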

@nextcloud-bot nextcloud-bot mentioned this pull request Aug 19, 2025

5 participants