  
  * Variety - Imaging data in pathology is generated during biopsies (macroscopic observations on the sectioning station), brightfield microscopy (high-resolution), immuno observations (multiple channels), and z-stacking.
  * Volume - The recorded images are large: think 100k x 50k pixels in 16-bit RGB color resolution. An individual slide can be anywhere from 100 MB (a needle biopsy) to several GB in size (a solid tumor section sample scanned at 40X magnification); see the back-of-the-envelope calculation after this list.
  * Velocity - Data comes in rapidly, with 100s of slides being scanned on a daily basis. This poses challenges in terms of how much pre-treatment and time you can spend on any individual slide.
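
To put the volume numbers in perspective, here is a quick back-of-the-envelope calculation (plain Python, independent of any PMA.core functionality) of what a single 100k x 50k pixel slide in 16-bit RGB would occupy if stored uncompressed - one reason whole slide images are kept in tiled, compressed pyramidal formats and served through a tile server in the first place.

<code python>
# Raw (uncompressed) size of a single whole slide image
width_px  = 100_000      # ~100k pixels wide
height_px = 50_000       # ~50k pixels high
channels  = 3            # RGB
bytes_per_channel = 2    # 16-bit per channel

raw_bytes = width_px * height_px * channels * bytes_per_channel
print(f"{raw_bytes / 1024**3:.1f} GiB uncompressed")  # prints "27.9 GiB uncompressed"
</code>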
  
For these reasons it's important to have a tile server solution that is flexible.
  
PMA.core supports the following [[rootdir_config#adding_mounting_points|storage media]]:
  
  * local hard disk (think conventional ''C:'' and ''D:'' drives and partitions)
  * network storage like SMB shares (must be accessible via UNC ''%%\\server\path\to\data%%'' routes)
  * [[rootdir_s3|S3-compliant cloud storage]] (Amazon AWS, Western Digital HGST, NetApp, Arvados, IBM...)
  * [[rootdir_azure|Microsoft Azure storage]] (including [[rootdir_azure#data_lake_gen2_storage|Data Lake Gen 2]])
  * FTP server (yup, that [[https://www.filezilla.org|free FileZilla File Transfer Protocol server]] is still around and can now be put to work on new digital pathology applications!)
  
Our tile server introduces [[rootdir|root directories]]: virtual mounting points that can point to any of these types of storage where your slides reside, and make them available to end users.
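
As a minimal sketch of what this abstraction looks like from the client side, the snippet below lists root directories and the slides underneath them. It assumes the Pathomation ''pma_python'' SDK is installed and that a PMA.core server is reachable at the placeholder URL and credentials shown; exact function names may vary between SDK versions.

<code python>
# Minimal sketch: enumerate root directories and slides through PMA.core.
# The client does not need to know whether a root directory is backed by
# local disk, an SMB share, S3, Azure, or FTP.
# Server URL, username, and password below are placeholders.
from pma_python import core

session_id = core.connect("https://my-pma-core-server/pma.core/", "my_user", "my_password")

for root_dir in core.get_root_directories(session_id):
    print("Root directory:", root_dir)
    for slide in core.get_slides(root_dir, session_id):
        print("  slide:", slide)

core.disconnect(session_id)
</code>

Because the root directory hides the underlying medium, the same loop keeps working when a root directory is later re-pointed from, say, a local disk to S3 storage.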
  
Most importantly, you can configure your root-directories in a hybrid fashion, with some storage pointing to traditional hard disks, and other (perhaps long-term) storage pointing to cloud resources.
  
This hybrid configuration model also means you can scale easily over time: you can start with a setup whereby your slides are mostly placed on a (big) local hard disk. After a while, you can seamlessly switch over to your organization's network storage. At an even later stage, you can fluidly migrate to S3-compliant cloud storage. When an external collaborator temporarily wants to share their slide collection with you, you can ask them to set up an FTP server and patch a root-directory through to it.
  
[[rootdir|Root-directory resources]] can have authentication and impersonation information attached to them. In addition, PMA.core has its own [[acl|access control lists]] to determine what [[user_groups]] and [[user_management|individual users]] can see and do (according to [[crud|the CRUD principle]]).
  
A comprehensive blog article on the subject of storage and image management is provided at [[https://realdata.pathomation.com|our blog]].
  