No Data Corruption & Data Integrity in Shared Hosting
The integrity of the data you upload to your shared hosting account is ensured by the ZFS file system that we use on our cloud platform. Like most hosting providers, we store content on multiple drives, and because the drives work in a RAID, the same information is synchronized between them at all times. With most file systems, however, a file that gets damaged on one drive for whatever reason is likely to be replicated to the other drives, since those file systems have no special check for such corruption. ZFS, in contrast, keeps a digital fingerprint, or checksum, for every file. If a file gets corrupted, its checksum no longer matches the record ZFS keeps for it, so the damaged copy is replaced with a good one from another drive. Since this happens in real time, there is no risk of your files ever being corrupted.
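The self-healing mechanism described above can be sketched as a small model. This is an illustration only, not actual ZFS code: the file names, data, and `fingerprint` helper are hypothetical, and SHA-256 stands in for whichever checksum algorithm the pool is configured to use.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Digital fingerprint (checksum) of the data, like ZFS keeps on record."""
    return hashlib.sha256(data).hexdigest()

# Three mirrored copies of the same file, as on drives in a RAID.
original = b"customer website data"
drives = [bytearray(original) for _ in range(3)]
expected = fingerprint(original)  # the checksum ZFS has on record

# Simulate silent corruption on one drive: a single flipped byte.
drives[1][0] ^= 0xFF

# Self-healing pass: any copy whose checksum mismatches the record
# is overwritten with a verified good copy from another drive.
good = next(bytes(d) for d in drives if fingerprint(bytes(d)) == expected)
for i, d in enumerate(drives):
    if fingerprint(bytes(d)) != expected:
        drives[i] = bytearray(good)

# After the pass, every mirrored copy matches the recorded checksum again.
assert all(fingerprint(bytes(d)) == expected for d in drives)
```

The key design point is that the checksum is stored separately from the data itself, so a silently corrupted copy can never "vouch for itself" - it is always validated against an independent record.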
No Data Corruption & Data Integrity in Semi-dedicated Hosting
We have eliminated any chance of files being silently damaged, because the servers where your semi-dedicated hosting account is created use a powerful file system called ZFS. Its advantage over other file systems is that it keeps a unique checksum for every single file - a digital fingerprint that is verified in real time. Since we store all content on multiple NVMe drives, ZFS checks whether the fingerprint of a file on one drive matches the fingerprints on the other drives and the one it has on record. If there is a mismatch, the corrupted copy is replaced with a good one from another drive, and because this happens instantly, a damaged copy can never remain on our web hosting servers or be duplicated to the other drives in the RAID. No other file system performs such checks, and even during the file system check that follows an unexpected power failure, none of them can detect silently corrupted files. ZFS, in contrast, doesn't crash after a power loss, and its constant checksum monitoring makes a lengthy file system check unnecessary.
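The fingerprint comparison can also be illustrated from the command line. This is a simplified stand-in using ordinary files and `sha256sum`, not ZFS itself; the file names and the "heal from a good mirror" step are hypothetical and only mimic what ZFS does automatically on every read.

```shell
# Three mirrored copies of a file; one is silently corrupted.
printf 'site content' > copy_a
printf 'site content' > copy_b
printf 'site-content' > copy_c          # corrupted copy

# The checksum on record, like the fingerprint ZFS stores per block.
sha256sum copy_a | awk '{print $1}' > fingerprint.txt

# Verify each copy against the record; heal any mismatch from a good copy.
for f in copy_a copy_b copy_c; do
  if [ "$(sha256sum "$f" | awk '{print $1}')" = "$(cat fingerprint.txt)" ]; then
    echo "$f: OK"
  else
    echo "$f: MISMATCH - replacing from a good mirror"
    cp copy_a "$f"
  fi
done
```

Because the check runs against an independently stored fingerprint rather than comparing drives to each other, even a power failure mid-write cannot leave a corrupted copy posing as a good one.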