Monday, March 26, 2012

A brief introduction of GridFS

GridFS is a built-in feature of MongoDB and can be used to store a large number of small files. MongoDB provides a command-line tool, mongofiles (in the bin directory), for working with GridFS.
List all files: mongofiles list
Upload a file: mongofiles put xxx.txt
Download a file: mongofiles get xxx.txt
mongofiles search xxx   // finds all files whose file name contains xxx
mongofiles list xxx     // finds all files whose file name starts with xxx
Parameter description:
-d specifies the database, default is fs, e.g. mongofiles list -d testGridfs
-u, -p specify the user name and password
-h specifies the host
-port specifies the port
-c specifies the collection name, default is fs
-t specifies the MIME type, ignored by default
You can use MongoVUE to view and manage GridFS. MongoVUE is free software, but its functionality is restricted after 15 days of use.
The restriction can be lifted by deleting the following registry entry: under Software\Classes\CLSID\{B1159E65-821C3-21C5-CE21-34A484D54444}, delete the value stored there.
Uploading and downloading files with the Java driver
The official documentation seems a bit dated, but you can work out how to use the API just by reading it. The following code is based on mongo-2.7.3.jar.

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.UnknownHostException;
import java.security.NoSuchAlgorithmException;

import com.mongodb.BasicDBObject;
import com.mongodb.DB;
import com.mongodb.DBCollection;
import com.mongodb.DBObject;
import com.mongodb.Mongo;
import com.mongodb.MongoException;
import com.mongodb.gridfs.GridFS;
import com.mongodb.gridfs.GridFSDBFile;
import com.mongodb.gridfs.GridFSInputFile;

public class Test {
    Mongo connection;
    DB db;
    DBCollection collection;
    GridFS myFS;

    String mongoDBHost = "127.0.0.1";
    int mongoDBPort = 27017;
    String dbName = "testGridfs";
    String collectionName = "fs";

    public static void main(String[] args) throws MongoException, IOException, NoSuchAlgorithmException {
        Test t = new Test();
        String fileName = "F:\\CPU.txt";
        String name = "CPU.txt";
        // put the file into gridfs under the given id (here the file name)
        t.save(new FileInputStream(fileName), name);
        // read the file back from gridfs by file name
        GridFSDBFile gridFSDBFile = t.getByFileName(name);
        if (gridFSDBFile != null) {
            System.out.println("filename: " + gridFSDBFile.getFilename());
            System.out.println("md5: " + gridFSDBFile.getMD5());
            System.out.println("length: " + gridFSDBFile.getLength());
            System.out.println("uploadDate: " + gridFSDBFile.getUploadDate());
            System.out.println("--------------------------------------");
            gridFSDBFile.writeTo(System.out);
        } else {
            System.out.println("can not get file by name: " + name);
        }
    }

    public Test() throws UnknownHostException, MongoException, NoSuchAlgorithmException {
        _init();
    }

    public Test(String mongoDBHost, int mongoDBPort, String dbName, String collectionName)
            throws UnknownHostException, MongoException, NoSuchAlgorithmException {
        this.mongoDBHost = mongoDBHost;
        this.mongoDBPort = mongoDBPort;
        this.dbName = dbName;
        this.collectionName = collectionName;
        _init();
    }

    private void _init() throws UnknownHostException, MongoException, NoSuchAlgorithmException {
        connection = new Mongo(mongoDBHost, mongoDBPort);
        db = connection.getDB(dbName);
        collection = db.getCollection(collectionName);
        myFS = new GridFS(db);
    }

    /**
     * Save the file under the given id; if a file with that id already exists, do nothing.
     * The id can be a String, long, int or org.bson.types.ObjectId.
     */
    public void save(InputStream in, Object id) {
        DBObject query = new BasicDBObject("_id", id);
        GridFSDBFile gridFSDBFile = myFS.findOne(query);
        if (gridFSDBFile != null)
            return;
        GridFSInputFile gridFSInputFile = myFS.createFile(in);
        gridFSInputFile.put("_id", id); // store under the caller-supplied id
        gridFSInputFile.save();
    }

    /**
     * Return the file with the given id.
     */
    public GridFSDBFile getById(Object id) {
        DBObject query = new BasicDBObject("_id", id);
        return myFS.findOne(query);
    }

    /**
     * Return a file by file name; only the first match is returned.
     */
    public GridFSDBFile getByFileName(String fileName) {
        DBObject query = new BasicDBObject("filename", fileName);
        return myFS.findOne(query);
    }
}

nginx-gridfs module installation
With nginx-gridfs, files stored in GridFS can be accessed directly over HTTP. Project address: github.com/mdirolf/nginx-gridfs.
1. Install the dependencies: zlib, PCRE and OpenSSL. On Ubuntu the following commands should work:
sudo apt-get install zlib1g-dev    (sudo apt-get install zlib-dev apparently cannot be installed)
sudo apt-get install libpcre3 libpcre3-dev
sudo apt-get install openssl libssl-dev
Install git (omitted here), then download the nginx-gridfs code with git:
git clone git://github.com/mdirolf/nginx-gridfs.git
cd nginx-gridfs
git submodule init
git submodule update
Download the nginx source with wget, then:
tar zxvf nginx-1.0.12.tar.gz
cd nginx-1.0.12
./configure --add-module=<path to nginx-gridfs>
make
sudo make install
If the compiler reports errors, add the --with-cc-opt=-Wno-error parameter to configure.
2. Configure nginx. Add the following location to the server block:
location /pics/ {
    gridfs pics field=filename type=string;
    mongo 127.0.0.1:27017;
}
This configuration means: the database is pics, files are accessed by file name (the filename field), and the filename field is of type string. Currently only _id and filename can be used to access files.
Start nginx: /usr/local/nginx/sbin/nginx
Use MongoVUE to upload a picture, 001.jpg, to the pics database. Open the file's URL; if everything worked, the picture is displayed.
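As a quick check from code rather than a browser, here is a minimal sketch (my own addition, not part of the original post) that fetches the picture through nginx-gridfs; the URL is an assumption based on the location /pics/ block above, with nginx listening on port 80 of the local machine.

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class CheckNginxGridfs {
    public static void main(String[] args) throws Exception {
        // hypothetical URL: adjust host/port to match your nginx listen directive
        URL url = new URL("http://127.0.0.1/pics/001.jpg");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        int status = conn.getResponseCode();
        System.out.println("HTTP status: " + status);
        if (status == 200) {
            // count the bytes that nginx serves straight out of GridFS
            InputStream in = conn.getInputStream();
            byte[] buf = new byte[8192];
            int total = 0;
            int n;
            while ((n = in.read(buf)) != -1) {
                total += n;
            }
            in.close();
            System.out.println("received " + total + " bytes");
        }
    }
}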
3. Shortcomings of nginx-gridfs: it does not implement HTTP range support, i.e. partial or resumable downloads.

How GridFS works
In the database, GridFS uses two collections, by default fs.files and fs.chunks, to store files.
The fs.files collection stores the file information and fs.chunks stores the file data. A record in fs.files, i.e. the information for one file, looks like this:

{
  "_id" : ObjectId("4f4608844f9b855c6c35e298"),        // unique id; can be of a user-defined type
  "filename" : "CPU.txt",                              // file name
  "length" : 778,                                      // file length
  "chunkSize" : 262144,                                // chunk size
  "uploadDate" : ISODate("2012-02-23T09:36:04.593Z"),  // upload time
  "md5" : "e2c789b036cfb3b848ae39a24e795ca6",          // md5 of the file
  "contentType" : "text/plain",                        // MIME type of the file
  "meta" : null                                        // other file information; there is no meta key by default, and users can set it to any BSON object
}

The corresponding chunk in fs.chunks looks like this:

{
  "_id" : ObjectId("4f4608844f9b855c6c35e299"),        // id of the chunk
  "files_id" : ObjectId("4f4608844f9b855c6c35e298"),   // id of the file, matching an object in the fs.files collection; works like a foreign key
  "n" : 0,                                             // which chunk of the file this is; a file larger than chunkSize is split into multiple chunks
  "data" : BinData(0, "QGV...")                        // the binary data of the file; contents omitted here
}

The default chunk size is 256 KB:
public static final int DEFAULT_CHUNKSIZE = 256 * 1024;

When a file is written to GridFS, if it is larger than chunkSize it is split into several chunks, the chunks are saved to fs.chunks, and finally the file information is written to fs.files.
When a file is read, a matching record is first located in fs.files according to the query conditions and its _id value is taken; then all chunks in fs.chunks whose files_id equals that _id are found and sorted by n; finally the data of each chunk is read in order and the original file is reassembled.
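To make the read path above concrete, here is a minimal sketch (my own addition, not from the original post) that reassembles a file directly from fs.files and fs.chunks using the same 2.x Java driver; the host, the testGridfs database and the CPU.txt file name are simply the assumptions used in the earlier example.

import java.io.ByteArrayOutputStream;

import com.mongodb.BasicDBObject;
import com.mongodb.DB;
import com.mongodb.DBCollection;
import com.mongodb.DBCursor;
import com.mongodb.DBObject;
import com.mongodb.Mongo;

public class ManualRead {
    public static void main(String[] args) throws Exception {
        DB db = new Mongo("127.0.0.1", 27017).getDB("testGridfs");
        DBCollection files = db.getCollection("fs.files");
        DBCollection chunks = db.getCollection("fs.chunks");

        // step 1: find the matching record in fs.files and take its _id
        DBObject fileInfo = files.findOne(new BasicDBObject("filename", "CPU.txt"));
        if (fileInfo == null) {
            System.out.println("file not found");
            return;
        }
        Object fileId = fileInfo.get("_id");

        // step 2: find all chunks whose files_id equals that _id, sorted by n
        DBCursor cursor = chunks.find(new BasicDBObject("files_id", fileId))
                                .sort(new BasicDBObject("n", 1));

        // step 3: concatenate the chunks' data in order to rebuild the original file
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        while (cursor.hasNext()) {
            byte[] data = (byte[]) cursor.next().get("data");
            out.write(data);
        }
        System.out.println("reassembled " + out.size() + " bytes");
    }
}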
Customizing the GridFS hash function
In theory, whatever hash function is used, there can be files whose hash values are identical even though their contents differ; for the MD5 algorithm that GridFS uses by default, files with the same length and the same MD5 value but different contents are already known to exist. If you want to switch to another hash algorithm, you can start from the driver: since GridFS in MongoDB is really just two ordinary collections, you can modify the driver yourself and swap in another hash algorithm. The current Java driver is fairly simple, so such a change is easy to make; note, however, that the result no longer conforms to the GridFS specification.

Notes
1. GridFS does not automatically deduplicate files with the same MD5. If a file with a given MD5 should be stored in GridFS only once, the user has to handle that; the MD5 value is computed on the client side.
2. Because uploading a file to GridFS first writes the file data to fs.chunks and only then stores the file information in fs.files, a failure part-way through an upload can leave orphaned chunks in fs.chunks that are not referenced by any record in fs.files.
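As an illustration of note 1, the following minimal sketch (my own addition, not from the original post) computes the file's MD5 on the client and stores the file only if no fs.files record with that MD5 exists yet; the connection settings and the file path are the same assumptions as in the earlier example.

import java.io.FileInputStream;
import java.io.InputStream;
import java.security.DigestInputStream;
import java.security.MessageDigest;

import com.mongodb.BasicDBObject;
import com.mongodb.DB;
import com.mongodb.Mongo;
import com.mongodb.gridfs.GridFS;
import com.mongodb.gridfs.GridFSInputFile;

public class DedupSave {
    public static void main(String[] args) throws Exception {
        DB db = new Mongo("127.0.0.1", 27017).getDB("testGridfs");
        GridFS fs = new GridFS(db);
        String path = "F:\\CPU.txt";

        // compute the MD5 of the local file on the client side
        MessageDigest md = MessageDigest.getInstance("MD5");
        InputStream in = new DigestInputStream(new FileInputStream(path), md);
        byte[] buf = new byte[8192];
        while (in.read(buf) != -1) {
            // reading through the DigestInputStream updates the digest
        }
        in.close();
        StringBuilder md5 = new StringBuilder();
        for (byte b : md.digest()) {
            md5.append(String.format("%02x", b));
        }

        // store the file only if fs.files has no record with this md5 yet
        if (fs.findOne(new BasicDBObject("md5", md5.toString())) == null) {
            GridFSInputFile f = fs.createFile(new FileInputStream(path));
            f.setFilename("CPU.txt");
            f.save();
            System.out.println("stored file with md5 " + md5);
        } else {
            System.out.println("file with md5 " + md5 + " already stored, skipping");
        }
    }
}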