CircleCI
All Systems Operational
Docker Jobs: Operational (99.77% uptime over the past 90 days)
Machine Jobs: Operational (99.77% uptime over the past 90 days)
macOS Jobs: Operational (99.76% uptime over the past 90 days)
Windows Jobs: Operational (99.77% uptime over the past 90 days)
Pipelines & Workflows: Operational (99.98% uptime over the past 90 days)
CircleCI UI: Operational (99.85% uptime over the past 90 days)
Artifacts: Operational (100.0% uptime over the past 90 days)
Runner: Operational (99.77% uptime over the past 90 days)
CircleCI Webhooks: Operational (99.99% uptime over the past 90 days)
CircleCI Insights: Operational (100.0% uptime over the past 90 days)
Notifications & Status Updates: Operational
Billing & Account: Operational
CircleCI Dependencies: Operational
  AWS: Operational
  Google Cloud Platform / Google Cloud DNS: Operational
  Google Cloud Platform / Google Cloud Networking: Operational
  Google Cloud Platform / Google Cloud Storage: Operational
  Google Cloud Platform / Google Compute Engine: Operational
  mailgun API: Operational
  mailgun Outbound Delivery: Operational
  mailgun SMTP: Operational
Upstream Services: Operational
  Atlassian Bitbucket API: Operational
  Atlassian Bitbucket Source downloads: Operational
  Atlassian Bitbucket SSH: Operational
  Atlassian Bitbucket Webhooks: Operational
  Docker Hub: Operational
  GitHub: Operational
  GitHub API Requests: Operational
  GitHub Packages: Operational
  GitHub Webhooks: Operational
Past Incidents
Mar 28, 2023

No incidents reported today.

Mar 27, 2023

No incidents reported.

Mar 26, 2023

No incidents reported.

Mar 25, 2023

No incidents reported.

Mar 24, 2023

No incidents reported.

Mar 23, 2023
Resolved - This incident has been resolved.
Mar 23, 19:15 UTC
Update - Queue times are continuing to go down. We have implemented a fix and are continuing to monitor.
Mar 23, 18:51 UTC
Monitoring - Queue times continue to go down. We have implemented a fix and are moving the incident over to Monitoring.
Mar 23, 18:29 UTC
Update - We are seeing improvement in queue times - the wait time has dropped to roughly 20 minutes and is trending downwards.
Mar 23, 18:18 UTC
Update - Queue times have grown to about 30 minutes. We are continuing to work on reducing the number of queued jobs.
Mar 23, 18:01 UTC
Update - We have an understanding of the issue and we are continuing to work on a fix. For now, there are still delays on Linux machine jobs and Remote Docker jobs.
Mar 23, 17:39 UTC
Identified - We are seeing longer queue times for some jobs. Users may see a 10-15 minute delay for Linux machine jobs and Remote Docker jobs.
Mar 23, 17:17 UTC
Resolved - The issue has been resolved.

It should be noted that customers who attempted to trigger a build during the outage may need to navigate to the webhook settings in the GitLab UI and reactivate the webhook before they can trigger builds again. If the issue persists after reactivating the webhook, remove it completely and recreate it.

Mar 23, 16:27 UTC
Monitoring - We have identified the issue and made the necessary changes to address the behavior, so GitLab pipelines should be executing normally. Customers who attempted to trigger a build during the outage may need to navigate to the webhook settings in the GitLab UI and reactivate the webhook before they can trigger builds again. If the issue persists after reactivating the webhook, remove it completely and recreate it.
Mar 23, 16:15 UTC
Investigating - CircleCI is currently seeing issues with GitLab webhooks causing builds from GitLab to not trigger. We are investigating the cause of this issue.
Mar 23, 15:48 UTC
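The webhook recovery steps above (reactivate, or remove and recreate) can also be scripted against the GitLab REST API rather than clicked through in the UI. A minimal sketch, assuming a hypothetical project ID, personal access token, and CircleCI delivery URL; none of these values appear in the incident report itself:

```python
# Hypothetical sketch: rebuilding a GitLab project webhook via the GitLab
# REST API (GET/POST /projects/:id/hooks, DELETE /projects/:id/hooks/:hook_id).
# The project ID, token, and hook URL below are placeholder assumptions.
import urllib.parse
import urllib.request

GITLAB_API = "https://gitlab.com/api/v4"

def list_hooks_request(project_id: int, token: str) -> urllib.request.Request:
    """Build a GET request that enumerates the project's existing webhooks."""
    return urllib.request.Request(
        f"{GITLAB_API}/projects/{project_id}/hooks",
        headers={"PRIVATE-TOKEN": token},
    )

def recreate_hook_requests(project_id: int, hook_id: int, hook_url: str,
                           token: str):
    """Build a DELETE for the stale hook and a POST for its replacement."""
    delete = urllib.request.Request(
        f"{GITLAB_API}/projects/{project_id}/hooks/{hook_id}",
        headers={"PRIVATE-TOKEN": token},
        method="DELETE",
    )
    create = urllib.request.Request(
        f"{GITLAB_API}/projects/{project_id}/hooks",
        data=urllib.parse.urlencode({
            "url": hook_url,            # e.g. the CircleCI delivery endpoint
            "push_events": "true",
            "merge_requests_events": "true",
        }).encode(),
        headers={"PRIVATE-TOKEN": token},
        method="POST",
    )
    return delete, create
```

The sketch only constructs the requests; sending them with `urllib.request.urlopen` (and inspecting the JSON from the list call to find the stale hook's ID) is left to the operator, since event flags and credentials vary per project.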
Mar 22, 2023

No incidents reported.

Mar 21, 2023

No incidents reported.

Mar 20, 2023
Resolved - This incident has been resolved.
Mar 20, 16:45 UTC
Monitoring - We are seeing Docker job start times recovering. We continue to monitor the situation.
Mar 20, 16:27 UTC
Update - We are seeing Docker job start times recovering. We continue to monitor the situation.
Mar 20, 16:24 UTC
Identified - We are seeing a delay in the execution of Docker jobs. Customers may see delays in the "Preparing Environment" step.

We have identified the cause and are working towards resolving the issue.

Mar 20, 16:12 UTC
Mar 19, 2023

No incidents reported.

Mar 18, 2023

No incidents reported.

Mar 17, 2023

No incidents reported.

Mar 16, 2023
Resolved - This incident has been resolved.
Mar 16, 18:02 UTC
Update - We are starting to see recovery across all Docker jobs and are adding capacity to speed up recovery.
Mar 16, 17:47 UTC
Update - Delays have recovered for most Docker resource classes.

Docker Jobs using Extra Large Resource Class are still experiencing delays. We are working on further mitigation for these jobs.

Mar 16, 17:27 UTC
Monitoring - We have encountered an issue with delays in Docker jobs and Remote Docker with DLC jobs. A fix has been implemented and we are monitoring the results.
Mar 16, 16:56 UTC
Mar 15, 2023
Resolved - This incident has been resolved. Please be aware that yarn will be missing from the image until downloads from ghcr.io recover. We appreciate your patience.
Mar 15, 15:00 UTC
Monitoring - A fix has been implemented and we are monitoring the results.
Mar 15, 14:52 UTC
Identified - We are seeing an issue with our macOS images spinning up. The cause has been identified and we are working toward a resolution.
Mar 15, 14:36 UTC
Postmortem - Read details
Mar 24, 01:40 UTC
Resolved - We are continuing to see usual service levels for GitHub Checks updates. This incident is now resolved.
Mar 15, 13:15 UTC
Monitoring - We made a number of changes to mitigate the impact of the delayed checks. We have seen a downward trend of delayed GitHub Checks, and are now seeing usual activity. We will monitor the current activity.
Mar 15, 11:08 UTC
Update - We are continuing to work on a fix for the delayed GitHub Checks.
Mar 15, 10:55 UTC
Identified - We have identified the cause of the delayed Checks and are actively working on a fix.
Mar 15, 10:27 UTC
Investigating - We are currently observing a delay with GitHub Checks of up to 30 minutes. We are working to identify the cause of the issue.
Mar 15, 10:04 UTC
Postmortem - Read details
Mar 15, 17:47 UTC
Resolved - This incident has been resolved. Docker, Mac, Machine, and other jobs should be running normally.
Mar 15, 05:14 UTC
Update - Docker, Mac, Machine, and other jobs are now running normally. We are continuing to monitor the situation. We appreciate your patience.
Mar 15, 04:18 UTC
Monitoring - We are seeing jobs becoming operational again. We will continue to monitor the situation.
Mar 15, 03:41 UTC
Identified - We are currently seeing issues with jobs not starting. We have identified the issue and we are working to fix this.
Mar 15, 03:19 UTC
Postmortem - Read details
Mar 24, 01:38 UTC
Resolved - This incident has been resolved. Docker, Mac, and Machine jobs should be running correctly.
Mar 15, 01:59 UTC
Monitoring - Machine jobs are now running, we expect the backlog to clear over the next hour. We will continue to monitor.
Mar 15, 01:34 UTC
Update - Docker and Mac jobs are now becoming operational. Machine jobs are starting to run, but we are still monitoring for capacity issues.
Mar 15, 01:27 UTC
Update - We've identified a problem in our internal networking systems that triggered the issue, and have made configuration and deployment changes to address it.

Docker jobs with contexts are starting to run successfully. We will continue to monitor for further issues with Contexts.
We are still working to add capacity for machine jobs.

Mar 15, 01:03 UTC
Update - UI access remains stable. We are still tuning capacity and resources to process backlogs. We appreciate your patience and understanding.
Mar 15, 00:14 UTC
Update - UI and API access has mostly recovered. We are still working through capacity issues for Machine jobs and Contexts.
Mar 14, 23:39 UTC
Update - We are continuing to add capacity to process the backlog of jobs. We appreciate your patience.
Mar 14, 22:59 UTC
Identified - We are seeing jobs running again, and are adding additional capacity.
Mar 14, 22:28 UTC
Update - We are seeing intermittent successes in the UI as some components have been recovered. We are continuing to work on getting jobs moving and will update shortly with status on jobs.
Mar 14, 22:13 UTC
Update - Our debugging efforts have led to partial recovery of some internal services. We are seeing intermittent success on user actions and continue to work on restoration. We'll report back in under 20 mins as we see the impact.
Mar 14, 21:46 UTC
Update - We are continuing to see degradation on our services including jobs not starting and UI impacts. We are currently investigating networking issues and will update further within 30 mins.
Mar 14, 21:09 UTC
Update - We are continuing to investigate this issue.
Mar 14, 20:46 UTC
Update - We are continuing to investigate this issue.
Mar 14, 20:19 UTC
Update - We are continuing to investigate this issue. Thank you for your patience while we work toward a resolution.
Mar 14, 19:52 UTC
Update - We are continuing to investigate this issue.
Mar 14, 19:20 UTC
Update - We are continuing our investigation. Currently, jobs are delayed in starting and are not completing.
Mar 14, 18:58 UTC
Investigating - We are seeing a delay in starting jobs. We are currently investigating and will update shortly.
Mar 14, 18:19 UTC
Mar 14, 2023