Modeling polarization for Hyper-NA lithography tools and masks
Kafai Lai, Alan E. Rosenbluth, et al.
SPIE Advanced Lithography 2007
The Hurricane File System (HFS) is designed for (potentially large-scale) shared-memory multiprocessors. Its architecture is based on the principle that, in order to maximize performance for applications with diverse requirements, a file system must support a wide variety of file structures, file system policies, and I/O interfaces. Files in HFS are implemented using simple building blocks composed in potentially complex ways. This approach yields great flexibility, allowing an application to customize the structure and policies of a file to exactly meet its requirements. As an extreme example, HFS allows a file's structure to be optimized for concurrent random-access write-only operations by 10 threads, something no other file system can do. Similarly, the prefetching, locking, and file cache management policies can all be chosen to match an application's access pattern. In contrast, most parallel file systems support a single file structure and a small set of policies. We have implemented HFS as part of the Hurricane operating system running on the Hector shared-memory multiprocessor. We demonstrate that the flexibility of HFS comes with little processing or I/O overhead. We also show that, for a number of file access patterns, HFS delivers the full I/O bandwidth of the disks on our system to applications.
Categories and Subject Descriptors: D.4.3 [Operating Systems]: File Systems Management-access methods; file organization; D.4.8 [Operating Systems]: Performance-measurements; E.5 [Data]: Files-optimization; organization/structure.
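The abstract describes files assembled from small composable building blocks, with structural choices (e.g., striping) and policies (e.g., locking) selected per file. The following is a minimal sketch of that composition idea, assuming a shared read/write interface and decorator-style wrapping; all class names (FileBlock, BasicExtent, StripedFile, LockedFile) are illustrative assumptions, not the actual HFS types.

```cpp
// Sketch only: hypothetical building blocks composed into a file object.
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <memory>
#include <mutex>
#include <string>
#include <vector>

// Interface implemented by every building block.
class FileBlock {
public:
    virtual ~FileBlock() = default;
    virtual void write(uint64_t off, const std::string& data) = 0;
    virtual std::string read(uint64_t off, size_t len) = 0;
};

// Leaf block: a contiguous in-memory extent (stands in for a disk extent).
class BasicExtent : public FileBlock {
    std::string bytes_;
public:
    void write(uint64_t off, const std::string& data) override {
        if (off + data.size() > bytes_.size()) bytes_.resize(off + data.size(), '\0');
        std::copy(data.begin(), data.end(), bytes_.begin() + off);
    }
    std::string read(uint64_t off, size_t len) override {
        if (off >= bytes_.size()) return {};
        return bytes_.substr(off, len);
    }
};

// Structural block: stripes the byte range across sub-blocks, so writers that
// touch different stripe units land on different underlying extents.
class StripedFile : public FileBlock {
    std::vector<std::unique_ptr<FileBlock>> stripes_;
    size_t unit_;
public:
    StripedFile(size_t nstripes, size_t unit) : unit_(unit) {
        for (size_t i = 0; i < nstripes; ++i)
            stripes_.push_back(std::make_unique<BasicExtent>());
    }
    void write(uint64_t off, const std::string& data) override {
        assert(off % unit_ + data.size() <= unit_ && "write must fit one stripe unit");
        uint64_t unit_no = off / unit_;
        size_t   stripe  = unit_no % stripes_.size();
        uint64_t local   = (unit_no / stripes_.size()) * unit_ + off % unit_;
        stripes_[stripe]->write(local, data);
    }
    std::string read(uint64_t off, size_t len) override {
        assert(off % unit_ + len <= unit_ && "read must fit one stripe unit");
        uint64_t unit_no = off / unit_;
        size_t   stripe  = unit_no % stripes_.size();
        uint64_t local   = (unit_no / stripes_.size()) * unit_ + off % unit_;
        return stripes_[stripe]->read(local, len);
    }
};

// Policy block: wraps any other block with a whole-file lock. Choosing,
// swapping, or omitting wrappers like this is how a per-file policy
// (locking, prefetching, caching) could be selected at composition time.
class LockedFile : public FileBlock {
    std::unique_ptr<FileBlock> inner_;
    std::mutex mu_;
public:
    explicit LockedFile(std::unique_ptr<FileBlock> inner) : inner_(std::move(inner)) {}
    void write(uint64_t off, const std::string& data) override {
        std::lock_guard<std::mutex> g(mu_);
        inner_->write(off, data);
    }
    std::string read(uint64_t off, size_t len) override {
        std::lock_guard<std::mutex> g(mu_);
        return inner_->read(off, len);
    }
};

int main() {
    // Compose a file from blocks: a coarse lock over a 4-way striped layout.
    LockedFile file(std::make_unique<StripedFile>(4, 4096));
    file.write(0, "hello");
    file.write(4096, "world");   // maps to a different stripe than offset 0
    std::cout << file.read(0, 5) << ' ' << file.read(4096, 5) << '\n';
}
```

The decorator-style composition is one plausible reading of the abstract's "building blocks composed in potentially complex ways": each block adds a single structural or policy feature, so an application could stack exactly the blocks its access pattern needs.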
Rafae Bhatti, Elisa Bertino, et al.
Communications of the ACM
Hang-Yip Liu, Steffen Schulze, et al.
Proceedings of SPIE - The International Society for Optical Engineering
Sai Zeng, Angran Xiao, et al.
Computer-Aided Design