Flush all redis instances after workflow #225

Open
jcscottiii opened this issue May 9, 2024 · 0 comments
Labels
enhancement New feature or request go Pull requests that update Go code

Comments

@jcscottiii
Collaborator

We should have a process that will proactively flush the redis instances when a workflow finishes.

jcscottiii added a commit that referenced this issue May 9, 2024
This change modifies terraform to deploy redis.
Given redis is a regional service and we are multi-region, we leverage the existing
region-to-subnet map to know which regions to create redis in.

In the meantime, we can have a quick TTL for redis.

A follow-up issue has been filed as #225.

We can probably increase the TTL for prod soon. But while data is still changing before launch, we
can keep it quick. Open to suggestions for the initial TTL.

About the terraform changes themselves:
- Because the deployment spans multiple projects, we use a shared VPC. Memorystore
  differs here from services like Spanner: Spanner instances do not live in our IP
  space, but Memorystore instances do. As a result, we have to set up
  "Private Services Access" by allocating a range of IPs for the service to deploy
  instances into. More in the blue banner at the top of [1] and in [2].
- The TTL for prod and staging is set to 5 minutes. We can increase this in the
  future as we get closer to launch.
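A sketch of the per-region deployment pattern described above, as a Terraform fragment. The variable and resource names (`var.region_to_subnet_info_map`, `google_compute_network.main`) are illustrative assumptions, not the repo's actual identifiers; the `google_redis_instance` resource and `PRIVATE_SERVICE_ACCESS` connect mode are the real Memorystore provider surface for Private Services Access.

```hcl
# Sketch only: variable and network names are hypothetical.
# One Memorystore instance per region, driven by the existing
# region-to-subnet map.
resource "google_redis_instance" "cache" {
  for_each = var.region_to_subnet_info_map

  name               = "cache-${each.key}"
  region             = each.key
  tier               = "BASIC"
  memory_size_gb     = 1
  authorized_network = google_compute_network.main.id
  connect_mode       = "PRIVATE_SERVICE_ACCESS"
}
```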

Other changes:
- Tell terraform to enable needed APIs.

[1] https://cloud.google.com/memorystore/docs/redis/networking
[2] https://cloud.google.com/memorystore/docs/redis/networking#private_services_access

Change-Id: I47e8d7f5c8eee7f0e0ab90a2f8fe6c88bc79933e
dlaliberte pushed a commit that referenced this issue May 9, 2024
@jcscottiii jcscottiii added the enhancement New feature or request label May 13, 2024
@jcscottiii jcscottiii added the go Pull requests that update Go code label May 28, 2024