Problem
We need to reliably and concurrently sync tasks and comments to and from GitHub in a non-blocking way.
By providing a timestamp-based concurrency control system, we can use a known algorithm to make our GitHub integration more robust.
More importantly, we will be able to unblock our other objectives. We cannot proceed with onboarding projects or volunteers unless GitHub sync is stable, since our overall strategy depends on us connecting volunteers to tasks.
Tasks
In scope
- TaskSyncOperation model
- CommentSyncOperation model
- TaskSyncOperation when issue webhook is received
- TaskSyncOperation when pull request webhook is received
- TaskSyncOperation when the task is created/updated from the client
- CommentSyncOperation when issue comment webhook is received
- CommentSyncOperation when the comment is created/updated from the client
Out of scope
Outline
We would have a sync operation for each type of internal record we want to sync. For example:
- TaskSyncOperation
- CommentSyncOperation
Every sync operation record, regardless of type, would have a:
- direction - :inbound | :outbound
- github_app_installation_id - the id of the app installation for this sync
- github_updated_at - the last updated-at timestamp for the resource on GitHub
- canceled_by - the id of the SyncOperation that canceled this one
- duplicate_of - the id of the SyncOperation that this is a duplicate of
- dropped_for - the id of the SyncOperation that this one was dropped in favor of
- state
  - :queued - waiting to be processed
  - :processing - currently being processed; limited to one per instance of the synced record, e.g. comment_id (see the index sketch after this list)
  - :completed - successfully synced
  - :errored - should be paired with a reason for the error
  - :canceled - another operation supersedes this one, so we should not process it
  - :dropped - this operation was outdated and was dropped
  - :duplicate - another operation already existed that matched the timestamp for this one
  - :disabled - we received the operation but cannot sync it because the repo no longer syncs to a project
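To enforce the at-most-one-:processing-per-record rule at the database level, a partial unique index is one option. A minimal migration sketch, assuming Postgres and Ecto; the table and index names are illustrative:

```elixir
defmodule MyApp.Repo.Migrations.AddProcessingIndexToCommentSyncOperations do
  use Ecto.Migration

  def change do
    # Allow at most one :processing operation per comment at a time.
    # Partial indexes are a Postgres feature; another database would
    # need a different enforcement mechanism.
    create unique_index(:comment_sync_operations, [:comment_id],
             where: "state = 'processing'",
             name: :one_processing_sync_operation_per_comment
           )
  end
end
```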
Then each type would have type-specific fields, e.g. a CommentSyncOperation would have:
- comment_id - the id of our comment record
- github_comment_id - the id of our cached record for the external resource
- github_comment_external_id - the id of the resource from the external provider (GitHub)
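Taken together, a CommentSyncOperation schema might look roughly like the sketch below. This assumes Ecto with Ecto.Enum for state; the module names, field types, and the error_reason field (pairing :errored with a reason) are assumptions:

```elixir
defmodule MyApp.Sync.CommentSyncOperation do
  use Ecto.Schema

  schema "comment_sync_operations" do
    # Shared sync-operation fields
    field :direction, Ecto.Enum, values: [:inbound, :outbound]
    field :github_app_installation_id, :integer
    field :github_updated_at, :utc_datetime_usec

    field :state, Ecto.Enum,
      values: [:queued, :processing, :completed, :errored, :canceled, :dropped, :duplicate, :disabled],
      default: :queued

    # Assumed field for pairing :errored with a reason
    field :error_reason, :string

    # Self-referential bookkeeping that forms the implicit audit trail
    belongs_to :canceled_by, __MODULE__
    belongs_to :duplicate_of, __MODULE__
    belongs_to :dropped_for, __MODULE__

    # Type-specific fields for comments
    belongs_to :comment, MyApp.Comments.Comment
    belongs_to :github_comment, MyApp.GitHub.Comment
    field :github_comment_external_id, :integer

    timestamps()
  end
end
```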
If the event is due to the resource being created, there will not be a conflict. If the resource was created from one of our own clients, there is no external GitHub ID yet; the same is true in reverse for events coming in from external providers, where there is no internal record yet. I'm not yet clear on whether we should conduct any conflict checking for these event types, but my guess is no. They should likely jump straight to :processing.
When an event comes in from GitHub we should (using a github_comment as our example):
- delegate to the proper sync operation table for the particular resource (in our example this would be comment_sync_operations)
- check if there are any operations for the github_comment_external_id where:
  - the github_updated_at is after our operation's last updated timestamp (limit 1)
    - if yes, set state to :dropped, set dropped_for to the id of the operation in the limit 1 query, and stop processing
  - the github_updated_at timestamp for the relevant record is equal to our operation's last updated timestamp (limit 1)
    - if yes, set state to :duplicate, set duplicate_of to the id of the operation in the limit 1 query, and stop processing
  - the modified_at timestamp for the relevant record is after our operation's last updated timestamp
    - if yes, set state to :dropped, set dropped_for to the id of the operation in the limit 1 query, and stop processing
- check if there are any :queued operations for the github_comment_external_id where:
  - github_updated_at is before our operation's last updated timestamp
    - if yes, set state of those operations to :canceled and set canceled_by to the id of this event
- check if there is any other :queued or :processing operation for the github_comment_external_id
  - if yes, set state to :queued
- when :processing, check again to see if we can proceed, then create or update the comment through the relationship on the record for comment_id
- when :completed, kick off a process to look for the next :queued item where the github_updated_at timestamp is the oldest
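A rough sketch of those conflict checks in Elixir, assuming the schema sketched earlier. MyApp.Sync.ConflictCheck and its helpers are illustrative names, and the record-level modified_at check is elided for brevity:

```elixir
defmodule MyApp.Sync.ConflictCheck do
  import Ecto.Query
  alias MyApp.Repo
  alias MyApp.Sync.CommentSyncOperation, as: Op

  # Run the timestamp checks for an inbound operation.
  def check(%Op{} = op) do
    cond do
      # A sibling with a later GitHub timestamp exists: drop this one.
      newer = sibling_with_timestamp(op, :gt) ->
        update_state!(op, :dropped, dropped_for_id: newer.id)

      # A sibling with the exact same GitHub timestamp exists: duplicate.
      twin = sibling_with_timestamp(op, :eq) ->
        update_state!(op, :duplicate, duplicate_of_id: twin.id)

      true ->
        # This operation supersedes any older queued siblings.
        cancel_older_queued(op)

        # Queue behind any other live operation, otherwise start work.
        if sibling_in_flight?(op) do
          update_state!(op, :queued, [])
        else
          update_state!(op, :processing, [])
        end
    end
  end

  defp sibling_with_timestamp(%Op{} = op, comparison) do
    base =
      Op
      |> where([o], o.github_comment_external_id == ^op.github_comment_external_id)
      |> where([o], o.id != ^op.id)
      |> limit(1)

    query =
      case comparison do
        :gt -> where(base, [o], o.github_updated_at > ^op.github_updated_at)
        :eq -> where(base, [o], o.github_updated_at == ^op.github_updated_at)
      end

    Repo.one(query)
  end

  defp cancel_older_queued(%Op{} = op) do
    Op
    |> where([o], o.github_comment_external_id == ^op.github_comment_external_id)
    |> where([o], o.state == ^:queued and o.github_updated_at < ^op.github_updated_at)
    |> Repo.update_all(set: [state: :canceled, canceled_by_id: op.id])
  end

  defp sibling_in_flight?(%Op{} = op) do
    Op
    |> where([o], o.github_comment_external_id == ^op.github_comment_external_id)
    |> where([o], o.id != ^op.id and o.state in ^[:queued, :processing])
    |> Repo.exists?()
  end

  defp update_state!(%Op{} = op, state, extra) do
    op
    |> Ecto.Changeset.change([state: state] ++ extra)
    |> Repo.update!()
  end
end
```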
Within the logic for updating the given record, we would also need to check whether the record's updated timestamp is after the operation's timestamp. If it is, we need to bubble up the changeset validation error and mark the operation as :dropped per the above.
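That guard might look something like the following changeset helper. A sketch only; validate_not_stale/2 is a hypothetical name, and it assumes the comment uses utc_datetime timestamps:

```elixir
# Hypothetical changeset helper: reject writes that would clobber a
# newer local edit. `op_timestamp` is the operation's github_updated_at.
defp validate_not_stale(changeset, op_timestamp) do
  updated_at = Ecto.Changeset.get_field(changeset, :updated_at)

  if updated_at && DateTime.compare(updated_at, op_timestamp) == :gt do
    # The local record is newer than the event; the caller should mark
    # the operation :dropped when it sees this error.
    Ecto.Changeset.add_error(changeset, :updated_at, "is newer than the sync operation")
  else
    changeset
  end
end
```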
Some upsides of the approach above that I wanted to document, in no particular order:
- The tracking above generates some implicit audit trails that will be helpful for debugging.
- Any unique-per-record queued operations can run in parallel without issue, i.e. we can run operations for %Comment{id: 1} and %Comment{id: 2} without any conflict.
- We can avoid "thundering herd" problems when the system receives back pressure, since we control precisely how the queue is processed.
- We can use this in conjunction with rate limiting to process only as many queued events as the given rate limit allows and defer further processing until after the rate limit has expired (see the sketch after this list).
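As a sketch of that last point, the queue drain could take only as many :queued operations as the remaining rate budget allows, oldest github_updated_at first. next_batch/1 and its budget argument are illustrative names:

```elixir
defmodule MyApp.Sync.QueueDrain do
  import Ecto.Query
  alias MyApp.Repo
  alias MyApp.Sync.CommentSyncOperation, as: Op

  # Take at most `budget` queued operations, oldest GitHub timestamp
  # first, leaving the rest for the next drain after the limit resets.
  def next_batch(budget) when is_integer(budget) and budget > 0 do
    Op
    |> where([o], o.state == ^:queued)
    |> order_by([o], asc: o.github_updated_at)
    |> limit(^budget)
    |> Repo.all()
  end

  def next_batch(_budget), do: []
end
```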