CircleCI

All Systems Operational

Docker Jobs Operational (99.74 % uptime over the past 90 days)
Machine Jobs Operational (99.67 % uptime over the past 90 days)
macOS Jobs Operational (99.71 % uptime over the past 90 days)
Windows Jobs Operational (99.74 % uptime over the past 90 days)
Pipelines & Workflows Operational (99.8 % uptime over the past 90 days)
CircleCI API Operational (100.0 % uptime over the past 90 days)
CircleCI UI Operational (99.98 % uptime over the past 90 days)
Artifacts Operational (99.8 % uptime over the past 90 days)
Runner Operational (100.0 % uptime over the past 90 days)
CircleCI Webhooks Operational (100.0 % uptime over the past 90 days)
CircleCI Insights Operational (100.0 % uptime over the past 90 days)
CircleCI Releases Operational (100.0 % uptime over the past 90 days)
Notifications & Status Updates Operational
Billing & Account Operational
CircleCI Dependencies Operational (100.0 % uptime over the past 90 days)
AWS Operational
Google Cloud Platform Google Cloud DNS Operational
Google Cloud Platform Google Cloud Networking Operational
Google Cloud Platform Google Cloud Storage Operational
Google Cloud Platform Google Compute Engine Operational
mailgun API Operational
mailgun Outbound Delivery Operational
mailgun SMTP Operational
OpenAI Operational (100.0 % uptime over the past 90 days)
Upstream Services Operational
Atlassian Bitbucket API Operational
Atlassian Bitbucket Source downloads Operational
Atlassian Bitbucket SSH Operational
Atlassian Bitbucket Webhooks Operational
Docker Hub Operational
GitHub Git Operations Operational
GitHub Pull Requests Operational
GitHub API Requests Operational
GitHub Packages Operational
GitHub Webhooks Operational
GitLab Operational
Docker Registry Operational
Docker Authentication Operational
Anthropic api.anthropic.com Operational
packagecloud.io Operational
Dec 8, 2025

No incidents reported today.

Dec 7, 2025

No incidents reported.

Dec 6, 2025

No incidents reported.

Dec 5, 2025
Resolved - The incident has been resolved. Thank you for your patience. Jobs failing with "infra-fail" should no longer be occurring.
Dec 5, 00:15 UTC
Monitoring - A fix has been implemented and we are monitoring the results.
Dec 5, 00:06 UTC
Identified - We are slowly returning to normal. If you had jobs that failed with "infra-fail", those jobs can be rerun. We thank you for your patience while our engineers worked to get our system back to stability.
Dec 4, 23:36 UTC
Investigating - We're currently investigating a high number of infra failures. We'll update as soon as we know more details.
Dec 4, 22:53 UTC
Dec 4, 2025
Resolved - This incident has been resolved.
Dec 4, 17:49 UTC
Monitoring - All jobs should now be running normally. The backlog of jobs has cleared and jobs should be running within typical times. We will continue monitoring to ensure consistent service. Thank you for your patience.
Dec 4, 17:43 UTC
Update - We are continuing to see recovery, but some customers may still experience delays in jobs running on M4 executors. We appreciate your patience whilst we work through the backlog of jobs.
Dec 4, 17:19 UTC
Update - We have mitigated the issue and macOS jobs which do not have a resource class specified should run normally. Customers can rerun jobs which failed and they should now run normally without any modification to their config. Thank you for your patience whilst we resolved this.

Customers using M4 machine jobs may experience minor delays in jobs starting during this period as there are a large number of re-runs occurring.

Dec 4, 16:29 UTC
Identified - We have identified an issue in our system which means that macOS jobs which don't specify a resource class will fail to run with an "Invalid resource class" error. We are working to correct this, but customers can work around it by specifying a resource class in their config.yml.
Dec 4, 16:04 UTC
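For illustration only, and not part of the official incident updates above: a minimal config.yml sketch of the workaround described in the Identified note, with an explicit resource class set on a macOS job. The Xcode version and resource class name are assumed example values, not recommendations; check CircleCI's resource class documentation for the values available on your plan.

# Hypothetical config.yml sketch: pin an explicit resource_class so the job
# does not depend on a default class being assigned. Values are examples only.
version: 2.1

jobs:
  build:
    macos:
      xcode: "15.3.0"              # assumed Xcode version; use your project's version
    resource_class: m2pro.medium   # example macOS resource class, set explicitly
    steps:
      - checkout
      - run: xcodebuild -version   # placeholder step

workflows:
  main:
    jobs:
      - build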
Dec 3, 2025
Resolved - We have resolved the issues affecting job triggering, workflow starts, and API queries. Our systems have been stabilized and are operating normally.

What was impacted: Job triggering, workflow starts, API queries, and pipeline page loading experienced disruptions for some customers. This affected all resource classes and executors.

Resolution: We implemented mitigation measures to address high volume workflow queries impacting our internal systems and increased system capacity. All new jobs and workflows are now starting normally, pipeline pages are loading, and API queries are functioning as expected.

What to expect: If you have jobs that became stuck during this incident, please rerun them. If you continue to experience issues after rerunning, please contact our support team. Some customers may still see jobs stuck in a cancelling state. Engineering is aware of this and is working to mitigate it.

We will continue monitoring our systems and conducting a thorough review to identify additional preventive measures.

Dec 3, 23:48 UTC
Update - We have deployed changes to mitigate the high volume of workflow queries impacting our systems. Pipeline pages that were previously failing to load are now loading successfully, and we are seeing a significant reduction in API errors.

What's impacted: Some customers continue to experience jobs stuck in a not-running state from earlier in the incident. New job triggering and workflow starts are now functioning normally.

What's happening: We have implemented mitigation measures and increased system capacity. We are continuing to investigate the remaining stuck jobs for affected customers.

What to expect: If you experienced issues loading pipeline pages or querying workflow data via the API, these should now be resolved. New jobs and workflows should trigger normally. If you have jobs that appeared stuck earlier, please try rerunning them while we continue to investigate reports of jobs that remain stuck for a small number of customers. The data for those workflows should be available and queryable.

Next update: We will provide an update within 30 minutes. Thank you for your patience while our engineers work through this incident.

Dec 3, 22:59 UTC
Update - We are currently experiencing issues affecting job triggering and workflow starts across all resource classes. Jobs may appear stuck in a not-running state, and some customers may encounter 500 errors when making API calls to check job or workflow status.

What's impacted: Job triggering, workflow starts, and API queries for job and workflow status are experiencing disruptions. This affects all resource classes and executors. Some users may also experience issues loading the pipeline page.

What to expect: We are actively working to stabilize our systems and restore normal operations. We will provide updates as we make progress toward resolution.

We thank you for your patience while we work through these issues - we will update with our progress within 30 minutes or earlier.

Dec 3, 21:58 UTC
Investigating - We are currently investigating reports of jobs not starting. We apologize for the inconvenience.
Dec 3, 21:28 UTC
Resolved - The issues affecting the pipelines page display are related to a broader incident impacting our systems. We have opened a separate incident tracking job triggering and API status issues, which encompasses the pipelines page loading problems. Please follow https://status.circleci.com/incidents/jq4bgq2sjt1r for ongoing updates.
Dec 3, 21:58 UTC
Update - We are continuing to investigate this issue.
Dec 3, 21:31 UTC
Investigating - We are seeing some issues loading the pipelines page. This doesn't affect all users but affects the display of pipelines. We are continuing to investigate the cause of this issue.
Dec 3, 21:31 UTC
Resolved - Between 16:20 and 16:32 UTC, job triggering and workflow starts experienced disruptions across all resource classes due to memory pressure on our internal job distributor systems. We identified the issue and scaled our infrastructure to handle the load. Services returned to normal operation at 16:32 UTC.

What was impacted: Job triggering and workflow starts were disrupted for 12 minutes. Some workflows and jobs appeared stuck in a running state during this window.

Resolution: Our systems are now operating normally with additional capacity in place to prevent similar disruptions. If you had workflows or jobs that were stuck during this window, please manually rerun them.

The incident is now resolved, and we will be conducting a thorough review to understand what triggered the memory pressure and identify any additional preventive measures.

Dec 3, 18:12 UTC
Monitoring - As of 16:32 UTC, job triggering and workflow starts have returned to normal operation across all resource classes. The impact was limited to a 12-minute window between 16:20 and 16:32 UTC.

What's impacted: All new jobs and workflows are now starting normally.

What to expect: If you have workflows or jobs that were stuck during the 16:20-16:32 UTC window, please manually rerun them.

We are continuing to investigate the root cause of this disruption and will provide an update within 30 minutes or once our investigation is complete.

Dec 3, 17:27 UTC
Investigating - At 16:20 UTC, we began experiencing delays in job triggering and starts across all resource classes. Some workflows and jobs may appear stuck in a running state.

What’s impacted: Job triggering is experiencing delays or is stuck. This affects all resource classes and executors.

What to expect: If you have workflows that appear stuck and haven’t started, we recommend manually rerunning them.

We are actively investigating the root cause and working to restore normal processing speeds. Next update: We will provide an update within 30 minutes or earlier with our progress.

Dec 3, 16:53 UTC
Resolved - This incident has been resolved. Things should be back to normal.
Dec 3, 15:30 UTC
Identified - We are seeing some issues loading the pipelines page. This is intermittent and won't affect most users. No work is being affected, just the display of pipelines. We have identified the issue and are working on a fix.
Dec 3, 13:37 UTC
Dec 2, 2025
Resolved - Between Dec 1, 2025 20:55 UTC and Dec 3, 2025 14:27 UTC, a change deployed to our workflow execution system caused duplicate notifications to be sent to some customers and triggered unexpected auto-reruns for a small number of projects.

Impact:
- Some customers received multiple failure notification emails for the same workflow.
- The number of duplicate notifications varied based on how many jobs in the workflow were affected (e.g., marked skipped or cancelled).
- A subset of customers may have received duplicate Slack notifications.
- A small number of projects experienced duplicated auto-reruns.

The issue has been fully mitigated, and notification behavior has returned to normal. We appreciate your patience while our team resolved this issue.

Dec 2, 02:00 UTC
Dec 1, 2025

No incidents reported.

Nov 30, 2025

No incidents reported.

Nov 29, 2025

No incidents reported.

Nov 28, 2025

No incidents reported.

Nov 27, 2025

No incidents reported.

Nov 26, 2025
Resolved - We are in the process of deprecating our M1 fleet due to limited machine capacity. While all jobs will run, some running on M1 Large may be delayed. Customers may move to M1 Medium if they must have an M1 chip; otherwise, we encourage customers to move to M2 and M4 resource classes. Thank you for your patience, and we apologize for any inconvenience.
Nov 26, 23:24 UTC
Identified - We are seeing an increase in wait times to start jobs on Mac M1 Large. Customers are encouraged to move workloads to Mac M4 if possible.
Nov 26, 22:53 UTC
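For customers moving workloads off M1 as suggested in the updates above, a hypothetical config.yml change might look like the following. The resource class names and Xcode version are illustrative assumptions; confirm the currently supported macOS resource classes in CircleCI's documentation before editing your config.

# Hypothetical migration sketch: switch a macOS job from an M1 Large class to a
# newer Apple silicon class. Class names are examples, not a definitive list.
version: 2.1

jobs:
  test:
    macos:
      xcode: "15.3.0"                       # assumed Xcode version
    # resource_class: macos.m1.large.gen1   # previously used M1 Large class (assumed name)
    resource_class: m2pro.medium            # example M2 replacement; an M4 class could be used instead
    steps:
      - checkout
      - run: xcodebuild -version

workflows:
  main:
    jobs:
      - test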
Resolved - All Machine resource classes are operating successfully. Thank you for your patience, and we apologize for any inconvenience this may have caused.
Nov 26, 16:28 UTC
Monitoring - We've identified the issue, and have rolled out a fix. We are now monitoring. To our affected customers, thank you for your patience as we got this resolved.
Nov 26, 16:06 UTC
Investigating - We are currently experiencing issues with certain resource classes for Machine Executors.
Nov 26, 15:50 UTC
Nov 25, 2025
Resolved - There was a short issue from 16:10 UTC to 16:13 UTC during which we did not successfully trigger some jobs. If you expected a job to run within the last 30 minutes and it did not, you should run the job manually or trigger it again (for example, by pushing to your repository).
Nov 25, 16:10 UTC
Nov 24, 2025

No incidents reported.