Like I said, I am using my Crucial M4 without trim and there have been no performance drops for over a year since I purchased it. How necessary is that though? I do not like doing tweaks and things like that when it is not necessary. I heard a few people have had issues with Trim Enabler also.

It's not 'critical', but it's recommended to have it on. What happens is when you delete a file, a command is sent to the controller to 'TRIM', i.e. 'clear', the cells that have data removed, so when data is to be written to the same cells later on, the controller doesn't need to clear them first.

Without TRIM: file deleted but data remains > new file/data about to be written > controller sees the cell hasn't been cleared and will clear it before writing.
With TRIM: file is deleted and the controller clears the data, so the cell is ready to be written to when next needed.

It's not critical, but you should have it on; otherwise you get microstutter issues, because the system/controller needs to wait for the cell to be cleared before data can be written to it again.

Here is the explanation snaphat gave that I found insightful in another thread: if there have been cases of data corruption issues with SSDs, then chances are it would have been with the older first/second generation SSDs, where TRIM was introduced later in firmware updates. I haven't had issues with my non-trim-enabled SSDs (on Windows 7 for 1.5 years, and Mac OS X for about 3 years). It could be that I do not have a workflow that has adequate write/re-write cycles on the SSD for it to make any difference. My understanding is limited and I do not necessarily disagree with the folks who say it is essential to enable.

Apple has products on the market with end-user-replaceable parts, such as my 2009 MacBook Pro. This is still unclear to me: if enabling trim on any SSD is such a no-brainer, why does Mac OS X not automatically enable it when any SSD is detected? If it is like NTFS write support, and can potentially really mess up data, then I'm not sure enabling it haphazardly is such a good idea. Is trim one of those things that, once enabled, you will immediately know about if your drive is not compatible? It could be that Apple just does not want to test every drive on the market, or a large enough set of drives, to have some statistical confidence that their version of trim support works well. Just guessing though.

Performance consistency tells us a lot about the architecture of these SSDs and how they handle internal defragmentation. The reason we don't have consistent IO latency with SSDs is that inevitably all controllers have to do some amount of defragmentation or garbage collection in order to continue operating at high speeds. When and how an SSD decides to run its defrag or cleanup routines directly impacts the user experience, as inconsistent performance results in application slowdowns.

To test IO consistency, we fill a secure erased SSD with sequential data to ensure that all user-accessible LBAs have data associated with them. Next we kick off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. The test is run for just over half an hour and we record instantaneous IOPS every second. We are also testing drives with added over-provisioning by limiting the LBA range. This gives us a look into the drive's behavior with varying levels of empty space, which is frankly a more realistic approach for client workloads.

Each of the three graphs has its own purpose. The first covers the whole duration of the test in log scale. The second and third zoom into the beginning of steady-state operation (t=1400s) but on different scales: the second uses a log scale for easy comparison, whereas the third uses a linear scale for better visualization of the differences between drives. Click the buttons below each graph to switch the source data. For a more detailed description of the test and why performance consistency matters, read our original Intel SSD DC S3700 article.

The interface has never been the bottleneck when it comes to random write performance, especially in steady-state. Ultimately the NAND performance is the bottleneck, so without faster NAND we aren't going to see any major increases in steady-state performance. The graphs above and below illustrate this, as the XP941 isn't really any faster than the SATA 6Gbps based 840 Pro. Samsung has made some tweaks to their garbage collection algorithms, and overall the IO consistency gets a nice bump over the 840 Pro, but still, this is something we've already seen with SATA 6Gbps SSDs.
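The consistency test described in the article portion (sequential fill so every user LBA holds data, then 4KB random writes at queue depth 32 across all LBAs while logging IOPS once per second) maps fairly directly onto a fio job file. The sketch below is an assumption about how one might reproduce the methodology, not the tool the review actually used; /dev/sdX is a placeholder for a dedicated scratch device, and running this destroys that device's contents.

```ini
; consistency.fio -- sketch of the IO consistency test described above
[global]
filename=/dev/sdX        ; placeholder scratch device, data will be destroyed
direct=1                 ; bypass the page cache
ioengine=libaio
refill_buffers           ; fresh random buffers, approximating incompressible data

[sequential-fill]        ; precondition: give every user-accessible LBA valid data
rw=write
bs=128k

[random-steady-state]
stonewall                ; start only after the fill job completes
rw=randwrite
bs=4k
iodepth=32
norandommap              ; write across all LBAs without tracking coverage
time_based
runtime=2000             ; roughly "just over half an hour"
write_iops_log=consistency
log_avg_msec=1000        ; one IOPS sample per second, as in the article
```

For the added over-provisioning runs, limiting the LBA range could be approximated with fio's size option (e.g. size=75%), leaving the remainder of the drive untouched after the secure erase.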
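The with/without TRIM sequences described in the thread can be sketched as a toy model. Everything below is hypothetical and illustrative: the class name, cell numbers, and timing constants are made up, and real controllers manage flash in pages and erase blocks with far more machinery. The point is only that without TRIM the erase lands on the write's latency path, which is the "microstutter" being discussed.

```python
# Toy model of why TRIM helps: without it, the controller discovers
# stale data only at write time and must erase first (slow). With
# TRIM, the erase happened earlier, off the critical path.
# Timings are made-up illustrative constants, not hardware numbers.

ERASE_COST = 5   # hypothetical time units to clear a cell
WRITE_COST = 1   # hypothetical time units to program a cell

class ToyController:
    def __init__(self, trim_enabled):
        self.trim_enabled = trim_enabled
        self.dirty = set()        # cells holding stale (deleted) data

    def delete_file(self, cell):
        if self.trim_enabled:
            pass                  # OS sent TRIM: cell cleared in advance
        else:
            self.dirty.add(cell)  # stale data lingers in the cell

    def write(self, cell):
        cost = WRITE_COST
        if cell in self.dirty:
            cost += ERASE_COST    # must erase before programming
            self.dirty.discard(cell)
        return cost

# Reuse the same cell after a delete:
with_trim = ToyController(trim_enabled=True)
with_trim.delete_file(7)
without_trim = ToyController(trim_enabled=False)
without_trim.delete_file(7)

print(with_trim.write(7))     # 1  (cell already cleared)
print(without_trim.write(7))  # 6  (erase + write on the latency path)
```

Note that the non-TRIM drive only pays the penalty the first time a stale cell is rewritten, which is why the slowdown shows up as intermittent stutter rather than a uniform loss of speed.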
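Because the test records instantaneous IOPS every second, the statistic that matters for consistency is the floor of the trace, not its mean. A small self-contained illustration with synthetic numbers (no relation to any measured drive):

```python
# Two synthetic IOPS-per-second traces with identical averages.
# drive_a writes at a steady rate; drive_b alternates fast bursts
# with deep garbage-collection stalls, the pattern that surfaces
# as application-level stutter.
drive_a = [40_000] * 50
drive_b = [75_000, 5_000] * 25

def summarize(trace):
    """Return (average IOPS, worst one-second IOPS)."""
    return sum(trace) / len(trace), min(trace)

avg_a, worst_a = summarize(drive_a)
avg_b, worst_b = summarize(drive_b)
print(avg_a, worst_a)  # 40000.0 40000
print(avg_b, worst_b)  # 40000.0 5000
```

Both traces average 40K IOPS, but the second drive spends every other second at 5K, which is exactly the behavior the log-scale and linear-scale steady-state graphs are meant to expose.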