We know that your EHR is mission-critical, which is why we and our service partners work hard to make sure Cerbo is running smoothly. When unavoidable performance issues or outages occur, we’ll keep this page updated in real time with status updates and resolution information.

All times below are in Pacific Time.

August 29, 2023 5:30PM Pacific - Extreme Weather near data center - RESOLVED

August 30, 2023 8PM Pacific - All hurricane watches and warnings for Central Florida have ended. The Orlando data center is standing down from high alert, and business as usual has resumed.
The hurricane had no significant impact on our Orlando datacenter. A thorough review was conducted, and the facility experienced no damage from the storm. Moreover, all electrical systems remained stable throughout, with no notable electrical events.

August 30, 2023 10AM Pacific - Our data center is not in the path of the hurricane, but even so, a number of steps have been taken as a precaution:
1. The datacenter tested its backup generator system today (refueling, plus replacement and testing of some generator starter motors)
2. We spoke with datacenter management today and confirmed that there will be additional routine checks on physical equipment during the duration of the storm
3. We confirmed that both primary ISP backbones that serve the datacenter are live so that traffic can be moved onto a secondary ISP in the event of an ISP failure
4. We migrated all demographic and scheduling data to a backup datacenter so that in the event of an outage we will be able to provide access to your schedules as well as contact details for each patient.
For storms that pose a greater likelihood of impacting our services, we also migrate "hot" backups to a secondary data center in Dallas (we did this during Hurricane Ian last year) so they can be rapidly spun up. However, our data center has stressed that they expect very little impact from this storm and aren't advising such action this round; they've ranked the disruption potential of this storm as "low", which has not always been the case for previous hurricanes. That said, we'll remain in contact with our hosting company over the next 24 hours and have additional contingency plans in place in the event of a prolonged outage, depending on the nature of the disruption.

August 29, 2023 5:30PM Pacific - We are aware of extreme weather near one of our data centers in Florida, and we are continually monitoring our systems to ensure we deliver the services and uptime our customers expect.

August 17th, 11:44am Pacific - Connectivity Reports - Resolved

4:30PM Pacific: Our hosting provider resolved the issue, and our systems are functioning as intended.

3:18PM Pacific: Host has moved the impacted firewall to new hardware. We are currently testing to ensure everything is resolved.
3:08PM Pacific: Our hosting provider is still containing the issue. They have a plan to migrate our infrastructure if the outage continues. We'll keep you updated as we find out more information.
1:20PM Pacific: Our host's networking team is still investigating load on the impacted firewall. In the meantime, clients behind the impacted firewall (FW01) continue to experience intermittent periods where traffic cannot reach our servers. The outages tend to be short (less than two minutes). We understand how disruptive this has been and are in regular contact with our host.
12:27PM Pacific: The issue is still under investigation. It appears the impacted clients are all behind the same firewall; the issue has been escalated to our host's networking team.
11:44AM Pacific: Over the last few minutes we've seen sporadic connectivity issues with our primary data-center, and we're reaching out to our host for more details.

August 2nd, ~12:00pm-1:07pm Pacific - eRx Outage - Resolved

Outbound eRx traffic is currently being impacted by an issue with SureScripts, the national eRx network. We're in contact with them to get details and an ETA for resolution. Refill traffic appears to be flowing normally.

July 26th, ~5AM PT - Resolved - Networking Incident impacting some Canadian Clients

We received alerts that some clinics hosted in our Canadian datacenter were offline. We reached out to our host who confirmed that their Toronto datacenter is experiencing a networking issue that is currently being worked on. We'll continue to follow up with the host for regular updates.
Update 8:35AM Pacific - Our host confirmed that the issue is with the primary edge-router at the Toronto data center. Engineers are attempting to repair it.
Update 8:47AM Pacific - Networking is back online for all Toronto-hosted EHRs. We're waiting for a formal debrief but everything appears to be working correctly.

July 20th, 4:10pm-5:00pm Pacific - Outage affecting less than 10% of builds

A firewall crashed at 4:10pm Pacific time, causing less than 10% of builds to be inaccessible until the firewall could be successfully restarted. We are working with our server company to better guard against a similar incident in the future.

June 15th, 10:05AM-12:51PM Pacific - Fullscript Slowness Reports - Resolved

Fullscript, one of our supplement integration providers, is currently experiencing technical difficulties, resulting in its temporary unavailability.
The issue with the interface between Cerbo and Fullscript has been resolved. Please reach out to us if you continue to encounter problems.

June 13th, 10:04AM-10:34AM Pacific - Reported Slowness in some builds - Resolved

We are receiving reports of servers encountering high load, though they are not down. Charts in affected builds are taking 5-10 seconds to load. We are working on the issue and expect the slowness to begin resolving in ~30 minutes.

May 11th, 10:30AM Pacific - Surescripts Intermittent Connectivity Issues - Resolved

SureScripts (the national e-prescribing network) announced that they were having intermittent connectivity issues nation-wide. As of 11:10AM they sent a follow-up confirming that issues had been resolved.

May 4th, 11:00AM - 12:00PM Pacific - Connectivity Issues Reported - Resolved

Five clients hosted on one physical node in our network were impacted by an event at our data-center. The issue was escalated to our host who was able to restore connectivity within an hour of initial reports. We're working with our host to migrate all clients off the impacted node within the next week to ensure we don't have further stability issues.

March 13, 2023 7:50AM Pacific - Server outage affecting some clients - Fully resolved by 12:50PM Pacific

A CPU spike on one of our physical nodes made networking services for some of our client builds inaccessible, causing connection issues for ~8% of our user base. As of 8AM Pacific we had restored connectivity for some clients, and by 9:30AM Pacific the majority of clients who had contacted us were able to resume normal operations. A smaller number of clients required additional escalation to get full networking restored, and between 10AM and 1PM Pacific all remaining clients were brought back online.
We're still performing a post-event analysis, but what we know so far is that all of the impacted customers were hosted on a specific physical server rack that experienced a sudden CPU spike, which disabled some networking services. Restoring connectivity required reconnecting each client's networking services on a per-customer basis (performing a graceful reset of some services), but for reasons we are still investigating, around 20 clients required more extensive intervention.

We've also isolated the source of the original CPU spike and moved it to a dedicated environment pending further investigation.

March 6, 2023 8:41AM Pacific - Quest's network "communication failures" - Resolved

This issue was resolved quickly. Quest's interface network appeared to be having issues this morning. We reached out to their interface team; until they resolved the issue, some users may have seen messages reporting "communication failures."

To view prior status updates, click here to visit the Status Archive.