When a file is physically read or written, it is not transferred one record at a time but one block at a time. So although our program reads a single record, that record was physically fetched together with a number of other records, and the read statement in our program simply passes the current record of the current block to the program. If the record size and the block size are the same, then every time we read or write a record a physical read or write must happen. This can slow a program down dramatically, turning a run of a few minutes into one of a few hours.

To optimize the process, a block should contain as many records as it possibly can, so that fewer physical reads or writes happen.
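The effect of blocking on physical I/O can be sketched with a little arithmetic. This is a hypothetical illustration (the function name and the one-million-record figure are mine, not from the text), not actual access-method code:

```python
def count_physical_reads(total_records: int, records_per_block: int) -> int:
    """Physical block reads needed to deliver total_records logical
    records, when each physical read fetches one whole block."""
    # Ceiling division: a partially filled last block still costs one read.
    return -(-total_records // records_per_block)

# Unblocked (block size == record size): one physical read per record.
unblocked = count_physical_reads(1_000_000, 1)    # 1,000,000 physical reads

# Blocked at 320 records per block: far fewer physical reads.
blocked = count_physical_reads(1_000_000, 320)    # 3,125 physical reads
```

With a blocking factor of 320, the same million logical reads cost roughly 320 times fewer physical I/O operations, which is exactly where the minutes-versus-hours difference comes from.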

Physically, a disk is organised into cylinders and tracks. The number of tracks and cylinders differs by disk type. The largest physical read that can happen at any one time is one full track.

In the past we had to determine the optimal block size from the disk type and the record size. So if a track could contain 32,000 bytes and a record was 100 bytes long, the blocking factor would be 32000/100 = 320 records per block, giving a block size of 32,000 bytes.
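The same calculation, written out (using the 32,000-byte track and 100-byte record figures from the text):

```python
TRACK_CAPACITY = 32_000   # bytes one track can hold (figure from the text)
RECORD_LENGTH = 100       # bytes per logical record

# Blocking factor: how many whole records fit in one track-sized block.
blocking_factor = TRACK_CAPACITY // RECORD_LENGTH   # 320 records per block

# Resulting block size in bytes (here the records fill the track exactly).
block_size = blocking_factor * RECORD_LENGTH        # 32,000 bytes
```

Note that 320 is the number of records per block; the block size itself is 32,000 bytes.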

Today there are too many different disk types out there, and since everything is managed by the system, we can let the system choose the optimal block size at the time the file is created, without having to care about the disk/device type the file is being created on. All we need to do is stipulate BLKSIZE=0.
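In JCL this looks something like the DD statement below. The dataset name, space allocation, and record length are hypothetical; the point is the BLKSIZE=0 subparameter, which asks z/OS to compute a system-determined block size for the device:

```jcl
//OUTFILE  DD DSN=MY.TEST.FILE,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(5,1)),
//            DCB=(RECFM=FB,LRECL=100,BLKSIZE=0)
```

The system then picks the largest block size that fits the device's track geometry for the given record format and length.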

