Trim on SSD Drives
When I first wrote this article, which was for my own benefit and eventually became this thread as it is today, TRIM was not yet available on SSD drives. When it did arrive there were mixed ideas on how it worked, a lot of which were incorrect at the time, and a lot of new users still don't understand the use of TRIM even today. Explanations tend to be very technical, as TRIM is a complicated subject at the best of times. In late 2009 I found a simple explanation of the implementation of TRIM; I came across it again the other day and it still holds true today, so here's the link. (I know you are going to point me to AnandTech's explanations - there are links in this thread, and if you're a technically minded person AnandTech's articles may be the better option.) This simpler explanation was still active in late 2011. As you can see if you carry on reading and don't switch off, TRIM can be complicated for a non-technical user of SSD drives. So here's my explanation of TRIM. Remember this is a generalization covering most SSD drives; it's a lot easier when you're dealing with your own drive.
TRIM and its association with GC (garbage collection) varies depending on how the controller's GC handles the TRIM command (Win7). TRIM is triggered by deleting files in the OS (Win7). It doesn't actually erase anything on the SSD; it marks the affected pages/files as invalid, making them available to be written over. The actual erasing is done by the GC, and it will only work if the block doesn't contain other files that haven't been marked deleted. How this plays out depends on how GC is implemented: the GC can wait until the block is full, then move any still-valid files to another block so the block can be erased. In doing this, the GC incurs a lot more write amplification than it would if TRIM in Win7 had already marked those pages/files as deleted. Flash memory can only be written over once the whole block has been erased back to all 1s, which allows the block to be re-written. GC can only erase a full block, not individual files or pages.
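To make the block-erase idea concrete, here's a toy Python sketch (my own simplified model, not any real controller's firmware, and the sizes are made up): GC has to copy the still-valid pages out of a block before the whole block can be erased, and every copy is extra write amplification that TRIM-marked pages avoid.

```python
# Toy model of one NAND block. Flash can only erase a whole block, so
# valid pages must be relocated first; each relocation is an extra write.

BLOCK_SIZE = 4  # pages per block (real blocks hold hundreds of pages)

def gc_erase_block(block, spare_block):
    """Relocate still-valid pages to a spare block, then erase.
    Returns the number of extra page writes (the WA cost of the erase)."""
    extra_writes = 0
    for page in block:
        if page["valid"]:                 # not marked deleted by TRIM
            spare_block.append(dict(page))
            extra_writes += 1
    block.clear()                         # whole-block erase: cells back to 1s
    return extra_writes

# A block holding four pages; TRIM has already invalidated two of them.
block = [{"data": "a", "valid": True},  {"data": "b", "valid": False},
         {"data": "c", "valid": True},  {"data": "d", "valid": False}]
spare = []
cost = gc_erase_block(block, spare)
print(cost)        # 2 - two valid pages had to be copied before the erase
print(len(spare))  # 2 - the valid pages survive in the spare block
```

With no TRIM, all four pages would still look valid to the controller and the copy cost would double, which is the extra write amplification described above.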
What TRIM does is mark these pages/blocks and make it simpler for the inbuilt GC (garbage collection) to recognize the blocks that are available for further use. On some controllers this will not necessarily happen immediately; it depends on whether the controller has been designed for "idle" use, "on the fly" use or "standby" use. GC/TRIM can be brought into play in many different ways: leaving the computer sitting with the BIOS open, letting it idle at the login screen, placing it in standby mode, deleting files, or simply idling the computer overnight. It's a matter of finding out how the controller in your particular SSD handles garbage collection. You will find the most efficient way by experimentation, or from other members passing on their particular way of doing it.
Low-level formatting as used on conventional HDDs writes mainly 0s to every cell, which is the opposite of how flash memory actually works. If you low-level format with Win7, or any software that writes 0s (or 0s and 1s) to the individual cells, you are not necessarily cleaning the SSD completely (hence the need for secure-erase software), and you can actually make performance worse.
If you use software designed for writing 1s to each cell, as you would if it were designed for SSD drives, this will "clean" the drive, and it's a good thing to do if you are selling the drive, or as a last resort if you are having problems with it. The downside of this type of erasure is that it not only takes a long time, it uses high write amplification, and if used regularly it can reduce the flash cells' life expectancy considerably. These types of deletions bypass most controllers' write-reduction capabilities - the way controllers extend the life expectancy of the SSD's individual MLC cells. SandForce calls theirs DuraWrite, and other controllers have this technology in some form or other; it increases MLC life expectancy by anywhere from 5 to 30 times depending on the design of the controller.
Basically, a command-line program like diskpart (or the older diskpar) will secure erase an SSD either by writing to the individual cells ("clean all" command) or by simply marking the blocks as deleted ("clean" command). You need to use the latter; it takes only seconds and will return the majority of SSDs to a near-new state without impacting too much on write amplification.
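For reference, a typical diskpart session looks something like this ("disk 1" is just an example - double-check against the "list disk" output, because this is destructive). Strictly speaking, "clean" only wipes the partition and boot structures, while "clean all" writes zeros to every sector, which is the slow, high-write-amplification option you want to avoid on an SSD:

```
C:\> diskpart

DISKPART> list disk
DISKPART> select disk 1      <- example number; verify it first with "list disk"
DISKPART> clean              <- seconds: wipes only partition/boot structures
DISKPART> clean all          <- hours: zero-fills every sector (avoid on SSDs)
DISKPART> exit
```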
That's how I see the use of GC and TRIM in Win7 today (Nov 2011).
Note! Most SSD software, toolboxes etc. use the inbuilt Win7 diskpart commands to make things easier than using the command line. There is an explanation of the use of diskpart, which can be found HERE, but NOTE! It's written with conventional hard drives in mind, not SSDs.
In the case of most toolboxes provided by manufacturers (the OCZ Toolbox is a typical example), they are incompatible with Intel's RST driver, so you would need to use diskpart from the command line. Also, toolboxes will in most cases not secure erase an SSD with an OS on it, or an SSD in use as the OS drive (e.g. the "C:" partition). If you want to secure erase an OS drive you need to delete all the partitions on the drive, including any hidden partitions. There's an excellent tutorial on the Intel Toolbox on Les T's website, TheSSDReview; here's the Link.
You will have to use diskpart or diskpar from the DOS prompt; you can't be in Windows with the SSD you intend to erase mounted. This is mainly for when the toolboxes fail to work and deleting the partitions doesn't solve the problem.
I'm sure there are exceptions to what I have written, and easier ways of explaining TRIM or secure erasing some types of SSD. All I ask is that you don't isolate passages out of context; please read the whole article before you tell me I'm incorrect. There are a lot of people out there more informative than me on this subject, so I'm open to criticism. I want to impart only the correct facts in this thread.
Trim and the IDE Issue
The Intel IDE drivers after Vista SP2 are fully compatible with the TRIM command, but for TRIM to pass through, the SSD controller itself also has to be compatible with IDE mode; e.g. Intel drives with Intel controllers are (according to Intel) fully compatible. The Crucial M4 appears not to be, and other controllers optimized for AHCI may also not be compatible. I can't comment on the Intel 510, as I've only ever used them in AHCI mode.
Wear Levelling
Here's an explanation that's not too complicated; it's from StorageSearch.com, here's the LINK.
Overprovisioning
Also from StorageSearch.com, a simple explanation of the need for overprovisioning; same link as above. That link covers a number of technologies used by the controller in SSD drives. Overprovisioning improves write performance. If the SSD is used in a high-write situation, increasing the overprovisioning will improve both performance and write endurance; in a high-read situation, too much can hinder performance. For an OS drive, the 7% supplied on client drives is in most cases probably adequate, depending on use; if a lot of writing is done to the drive daily, reducing the partition size by a small amount (which increases the overprovisioning) may improve performance.
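If you want to put numbers on that, here's a rough sketch (illustrative figures, not from any particular drive) of how shrinking the partition raises the effective overprovisioning percentage. The ~7% on client drives comes from 128GiB of NAND being sold as 128GB (decimal) of usable space:

```python
# Rough over-provisioning arithmetic. All figures are illustrative.

def op_percent(nand_gb, usable_gb):
    """Over-provisioning as a percentage of the user-visible capacity."""
    return (nand_gb - usable_gb) / usable_gb * 100

# Factory state: 128GiB of NAND exposed as roughly 119.2GiB usable.
print(round(op_percent(128, 119.2), 1))   # ~7.4%

# Shrinking the partition by 10GiB leaves that space unallocated,
# which the controller can treat as extra over-provisioning:
print(round(op_percent(128, 109.2), 1))   # ~17.2%
```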
http://forums.extremeoverclocking.com/showpost.php?p=3643482&postcount=1
Thursday, October 14, 2010
Today's solid state drives are worlds apart from those of just 3 years ago; however, they are not yet perfect. Performance degradation can still be observed through 'seasoning' of the SSD as well as filling it to capacity. SSD manufacturers have been successful in combating the effects of seasoning, but performance degradation when an SSD is filled to capacity seems to be just a bit more difficult.
...
Typical testing of most drives, through use of random data, will result in an observable performance drop which may start as soon as the SSD is filled past the 70% mark. This article will describe the common characteristics of SSDs followed by a simple method to ensure that maximum performance is sustained with the drive.
SEASONING
Much has been said with respect to performance degradation as a result of the SSD becoming 'seasoned' over time. By 'seasoned', we mean that the drive will eventually use up all of its empty blocks of NAND memory and, without TRIM, the process of writing to the drive actually becomes that of reading the block of data, recognizing that it is invalid, erasing, and then writing, rather than simply writing to a clean block. Performance is greater when writing to 'clean' memory versus memory which has previously been used and contains invalid data that has not been cleared. The root cause of degradation is that when a non-TRIM SSD is told to delete data, it actually only marks the area as clear, which leaves the invalid data intact and tricks the SSD into believing that the NAND flash is available.
Data on an SSD cannot simply be over-written as it is on a hard drive, and this gets a bit more complicated when we erase information and the block it is located on also contains valid information that we don't want deleted. The process then becomes: read the data, recognize the valid information, move it to another clean block, erase the present block, and write. Manufacturers have tried to combat this issue of performance degradation with three solutions: wear leveling, TRIM and ITGC (or Garbage Collection).
Wear leveling
Wear leveling is the process by which the SSD tracks how many times each cell of memory has been written to and then ensures that all are written to evenly. After all, the lifespan of the SSD depends on the total number of writes its cells can sustain, which has been coined 'write endurance'. Unlike the hard drive, which stores information in a static location, the SSD will move information around on a continuous basis without your knowledge to ensure that all cells wear evenly, thus affording a longer lifespan for the SSD. By doing this, the drive can also ensure that only the valid information is used, leaving blocks to be cleaned up by TRIM or ITGC, again without the knowledge of the user.
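A minimal sketch of the idea, assuming the controller simply keeps an erase count per block and steers the next write to the least-worn free block (real firmware is far more sophisticated, distinguishing hot/cold data and static wear leveling):

```python
# Naive wear-levelling allocator: always use the least-worn free block,
# so no single block burns through its program/erase cycles early.

def pick_block(erase_counts):
    """Return the index of the block with the lowest erase count."""
    return min(range(len(erase_counts)), key=lambda i: erase_counts[i])

erase_counts = [120, 87, 305, 87, 15]   # hypothetical per-block wear
victim = pick_block(erase_counts)
print(victim)              # 4 - the block with only 15 erases is used next
erase_counts[victim] += 1  # wear is recorded when the block is later erased
```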
ITGC/GC (Idle Time Garbage Collection)
Garbage Collection (GC) is the process by which the SSD recognizes, in idle time, which cells are valid and which are not valid (or deleted) on the drive. It then clears the blocks of the invalid data to maintain the speed of writing to ‘clean’ pages or blocks during normal operation. GC was initially shown to be a last resort if TRIM was not available, however, recent releases are showing new methods to be very aggressive and results equal to that of TRIM are being observed. This is a huge benefit to those using RAID systems where Garbage Collection is accomplished as TRIM is not an option.
The SSD Review was able to discuss GC and TRIM with Crucial as it pertains to their SATA3 releases, as it has been observed that their RealSSD C300 SATA3 drives do not appear to show any performance degradation over extended use. Crucial confirmed that they had to consider that TRIM would not pass through the present release of SATA3 drivers, which helped them recognize that very aggressive GC would be necessary for the C300 SATA3 SSD's success. The subsequent result was that many forum threads were created by avid users questioning whether TRIM was, in fact, working in their SSDs, as no performance degradation was seen even in the toughest of test beds. To dispel a common belief, it is not the Marvell processor of the Crucial RealSSD that prevents TRIM from being passed, but rather the hardware and drivers of SATA3-capable motherboards. All Crucial SSDs are fully capable of accepting the TRIM command from the OS.
TRIM
TRIM occurs when the SSD clears blocks of invalid data. When you delete a file, the operating system will only mark the area of the file as free in order to trick the system into believing the space is available. Invalid data is still present in that location. It's like ripping the Table of Contents out of a book: without it, one would not know what, if anything, is contained on the following pages. TRIM follows the process of marking the area as free by clearing the invalid data from the drive. Without this, the process of reading, identifying invalid data, deleting or moving, and clearing the block before writing can actually result in performance 4 times slower than that of a new drive.
In a recent conversation, Kent Smith, Sr. Director of Product Marketing for SandForce, identified that there are many variables outside of the hardware that are responsible for users not seeing the benefits of TRIM, the first of which is drivers at the OS level, which have to be working optimally in order for TRIM to function correctly. Another example occurred with early Windows 7 users testing their newly installed drives and not seeing the benefits of TRIM. Examination of these complaints revealed that users had originally made the Windows 7 installation on hardware that did not support TRIM, then cloned it to an SSD on which TRIM was supported but would not work because of the original configuration settings. The same could be said of cloning an OS that originally had AHCI turned off, followed by a clone to the SSD where TRIM was not being passed, simply because AHCI has to be activated for TRIM to function.
ENHANCE SSD OVER PROVISIONING MANUALLY
In our conversation, we broached the topic of SSD capacity with Mr Smith, to which he replied, "Are you trying to optimize performance or maximize capacity?", which reminded us that the main purpose of the consumer's transition to SSD was to maximize system performance. Filling a drive to capacity will hinder TRIM and GC ability, which will result in performance degradation. Many drives will start to display performance changes once filled to 70% capacity. Testing has shown that the user can very simply add to the drive's over provisioning, especially on a 7% over-provisioned drive, by reducing the size of the partition; the new unallocated space will automatically be picked up as over provisioning and benefit the SSD in many ways. This idea has been tackled by Fusion-io, who include a utility within their products that allows the user complete control over the size of their over provisioning.
OWC 120GB SSD with 16x8GB NAND flash = 128GB total (7% OP)
Over provisioning allows more data to be moved at one time, which not only enhances GC but also reduces write amplification on the drive. Write amplification is a bit tricky to explain, but it is the measure of how many bytes are actually written when storing a given number of bytes. A ratio of 1:1 would be ideal but is not a reality; a typical result might be 40kB actually written for a typical 4kB file. In short, maximizing over provisioning and reducing write amplification increase the performance and lifespan of the drive. Over provisioning also provides for remapping of blocks should bad blocks be discovered during wear leveling which, unlike on a hard drive, does not reduce the end-user capacity of the drive; the replacement blocks simply come from the over provisioning.
http://thessdreview.com/ssd-guides/optimization-guides/ssd-performance-loss-and-its-solution/
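The write amplification figure mentioned above is just a ratio, and it's easy to compute; this little Python sketch uses the article's example numbers (40kB actually written for a 4kB file):

```python
# Write amplification = NAND bytes actually written / host bytes requested.
# 1.0 would be the ideal; higher numbers mean more wear per host write.

def write_amplification(nand_bytes_written, host_bytes_written):
    return nand_bytes_written / host_bytes_written

# The article's example: 40kB of internal writes for a 4kB file.
print(write_amplification(40 * 1024, 4 * 1024))   # 10.0
```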
- Reducing the time GC takes
- Increasing the amount of freespace available after a GC (which increases the time it takes for performance to degrade after a GC)
- It lets the FTL have a wider selection of pages to choose from when it needs a new page to write to, which means it has a better chance of finding low-write-count pages, increasing the lifespan of the drive
Now, I want to be clear: a sufficiently clever GC on a drive that has enough reserved space might be able to do very well on its own, but ultimately what TRIM does is give the drive's GC algorithm better information to work with, which of course makes the GC more effective. What I showed above was a super-simple GC; real drive GCs take a lot more information into account. First off, they have to deal with more than two blocks, and their data takes up more than a single page. They track data locality, and they only run against blocks that have hit a certain threshold of invalid pages or have really bad data locality. There are a ton of research papers and patents on the various techniques they use. But they all have to follow certain rules based on the environment they work in; hopefully this post makes some of those clear.
http://www.devwhy.com/blog/2009/8/4/from-write-down-to-the-flash-chips.html
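The "threshold of invalid pages" victim selection described in that quote can be sketched in a few lines of Python (a deliberately naive greedy picker, nothing like a production FTL, and the block names and counts are made up):

```python
# Greedy GC victim selection: only consider blocks whose invalid-page
# count has crossed a threshold, then pick the one with the most invalid
# pages - erasing it reclaims the most space for the fewest valid copies.

def pick_gc_victim(blocks, threshold):
    """blocks: list of (block_id, invalid_pages) tuples.
    Returns the id of the best candidate, or None if nothing qualifies."""
    candidates = [(inv, bid) for bid, inv in blocks if inv >= threshold]
    if not candidates:
        return None   # nothing worth collecting yet
    return max(candidates)[1]

blocks = [("A", 2), ("B", 14), ("C", 9), ("D", 14)]
print(pick_gc_victim(blocks, threshold=8))   # D (tie broken by block id)
```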