We often get asked "Should I use GridFS for file storage with MongoDB?" As with most things, the answer is a staunch "it depends".
GridFS looks like a great idea on paper – a virtual filesystem held within MongoDB which allows files larger than 16MB to be held, synced and replicated. It's very tempting to consider GridFS when architecting your solutions. It appears able to take on the problem of storing many thousands or millions of files without consuming file-system resources, where there are often hard limits on the number of file names, and it seems to allow massive files to be stored without any obvious downsides.
It is important, though, to know what GridFS is under the hood. Any file stored with GridFS is chopped into 255KB chunks. Those chunks are saved as documents in a collection, fs.chunks, inside a bucket called fs; metadata about the files is stored in another collection in the same bucket, fs.files. You can have more buckets with different bucket names in the same database, and an index makes retrieving the chunks quick. All this chunking and metadata management is not done by the MongoDB database itself, though. It is a task performed by the client's driver, which is then wrapped in a GridFS API for that driver.
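To make the driver's side of the work concrete, here is a minimal sketch of the chunking step – splitting a file's bytes into fs.chunks-style documents plus the companion fs.files metadata document. The function name and the simplified document shapes are ours for illustration; real drivers add fields such as an upload date and (historically) an MD5 checksum.

```python
CHUNK_SIZE = 255 * 1024  # the GridFS default chunk size

def chunk_file(data, files_id, chunk_size=CHUNK_SIZE):
    """Sketch of what a driver does on a GridFS write: split the bytes
    into chunk documents destined for fs.chunks, and build the matching
    metadata document destined for fs.files."""
    chunks = [
        {"files_id": files_id, "n": n, "data": data[i:i + chunk_size]}
        for n, i in enumerate(range(0, len(data), chunk_size))
    ]
    files_doc = {"_id": files_id, "length": len(data), "chunkSize": chunk_size}
    return files_doc, chunks
```

A 16MB file run through this yields 65 chunk documents – 64 full 255KB chunks plus a 64KB tail.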
When you put or retrieve a large file of n KB in its entirety, the driver is writing or fetching all the relevant chunks – n/255, rounded up, of them – as documents, assembling them at the client end and writing them out to wherever they are needed. So a 16MB file is retrieved as 65 documents, most of them 255KB. Consider what would happen if you did that regularly on your MongoDB database outside of GridFS; there would be severe competition for the server's RAM between those documents and the rest of the database.
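The retrieval side of that can be sketched just as simply – the driver fetches every chunk document for the file, orders them by chunk number and concatenates the binary data at the client end. The function below is our own stand-in for that behaviour, operating on plain dicts rather than a live collection.

```python
def assemble(chunk_docs):
    """Sketch of a GridFS read: order the file's chunk documents by their
    chunk number `n` and concatenate the data client-side."""
    return b"".join(c["data"] for c in sorted(chunk_docs, key=lambda c: c["n"]))
```

Note that every one of those chunk documents has to pass through the server's memory on its way to the client – which is exactly where the working-set competition comes from.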
The chunking with GridFS, and the fact that it is done by the driver, also means that large operations like replacing an entire file within GridFS are not atomic, and there's no built-in versioning to fall back on. This may, or may not, be a problem for applications where files are concurrently accessible by many users or their applications. You can, though, work around this by layering your own versioning scheme over GridFS, only marking a replacement file as the latest version once it has been completely written.
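One way to sketch that write-then-publish versioning scheme: every upload goes in as a brand-new version, and a "current" pointer per filename is only flipped after the write has completed, so readers never see a half-written replacement. The class and its in-memory dicts are purely illustrative stand-ins for GridFS storage and a pointer collection.

```python
class VersionedStore:
    """Illustrative versioning layer: new versions are written in full
    before the per-filename "current" pointer is updated."""

    def __init__(self):
        self.blobs = {}    # version id -> bytes (stand-in for GridFS files)
        self.current = {}  # filename -> version id ("latest" pointer)
        self._next = 0

    def put(self, filename, data):
        vid = self._next
        self._next += 1
        self.blobs[vid] = data        # 1. write the new version completely...
        self.current[filename] = vid  # 2. ...then flip the pointer in one step
        return vid

    def get(self, filename):
        return self.blobs[self.current[filename]]
```

In a real deployment the pointer flip would be a single-document update, which MongoDB does perform atomically; old versions remain available until you choose to delete them.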
There is an upside to chunking, though – it is remarkably cheap to access particular sections of files because they are broken up into manageable blocks, so if you need to access particular parts of large binary files, you won't be pushing the working set out of memory in the server. You can also adjust the chunk size: if your application would work better with a smaller or larger chunk, you can tune your GridFS usage by requesting a particular chunk size. Smaller chunks displace less of the working set on each access; larger chunks mean fewer documents per file but more memory displaced per read.
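The arithmetic behind that cheap random access is simple: a byte range maps directly to a small run of chunk numbers, and only those documents need to be fetched. A sketch (the function name is ours):

```python
CHUNK_SIZE = 255 * 1024  # the GridFS default chunk size

def chunks_for_range(offset, length, chunk_size=CHUNK_SIZE):
    """Which chunk numbers cover bytes [offset, offset + length)?
    Only these documents need fetching; the rest of the file never
    touches the server's working set."""
    first = offset // chunk_size
    last = (offset + length - 1) // chunk_size
    return list(range(first, last + 1))
```

Reading 1KB from the middle of a multi-gigabyte file, for instance, touches one chunk document – or two if the range straddles a chunk boundary.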
You can avoid the entire issue of contention with your working set of data by having another MongoDB server dedicated to GridFS storage and optimized towards your file storage use patterns. This also lets you focus on tuning the best performance out of your core database instance without having to look over your shoulder for the march of the GridFS files through your working set. With MongoHQ, creating a separate database instance is easier than ever and there'll be a plan to suit your needs.
If you are dealing with files smaller than 16MB and want to handle them as atomic entities, it is also worth considering whether you need GridFS at all, because a MongoDB document can hold a 16MB field. You will have to ensure that when reading the database you only pull the large fields into memory when you need to, but this gives you the atomic replacement writes and the architecturally simpler system you may desire. The downside is that access to binary ranges within the file will likely require downloading the whole file, modifying it and re-writing it, but it is all about balance and matching your file storage and use patterns.
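The "only pull large fields when you need to" part is what MongoDB's exclusion projections are for – a query like db.files.find({...}, {"payload": 0}) returns every field except the big one. The helper below illustrates that projection behaviour client-side on plain dicts; the collection and field names (files, payload) are hypothetical.

```python
def apply_exclusion(doc, projection):
    """Client-side illustration of a MongoDB exclusion projection such as
    {"payload": 0}: strip the excluded fields, keep everything else."""
    excluded = {field for field, flag in projection.items() if flag == 0}
    return {k: v for k, v in doc.items() if k not in excluded}
```

With that pattern, metadata listings stay cheap, and you only fetch the full document – one atomic unit – when the file's bytes are actually needed.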
So, as we said at the beginning, "it depends" if GridFS is a good fit for your application's file storage needs. There are some pitfalls which you can avoid at planning time with some estimates of what quantity of file data you want to store and how you want to access it. There are also many benefits to a MongoDB and GridFS solution, especially in terms of replication and synchronisation.