A friendly environment to share my understanding and ideas about Oracle / Oracle Spatial database administration, ESRI ArcSDE Geodatabase administration, and the UNIX (Solaris) operating system.
DB_BLOCK_SIZE parameter properties:
- Range of values: 2048 to 32768, but your operating system may have a narrower range
- Basic: Yes
- Real Application Clusters: You must set this parameter for every instance, and multiple instances must have the same value.
Caution: Set this parameter at the time of database creation. Do not alter it afterward.
DB_BLOCK_SIZE specifies (in bytes) the size of Oracle database blocks. Typical values are 4096 and 8192. The value of this parameter must be a multiple of the physical block size at the device level.
The value for DB_BLOCK_SIZE in effect at the time you create the database determines the size of its blocks, and the value must remain at its initial setting for the life of the database.
For Real Application Clusters, this parameter affects the maximum value of the FREELISTS storage parameter for tables and indexes. Oracle uses one database block for each freelist group. Decision support system (DSS) and data warehouse database environments tend to benefit from larger block size values.
Note: The guidelines below are adapted from "Guidelines for DB_BLOCK_SIZE" (Oracle Metalink note 34020.1).
The setting of the DB_BLOCK_SIZE parameter depends on the type of data, the application, and the implementation. The main considerations are as follows:
1. Balance with the operating system block size.
- For good performance, the Oracle block size should be equal to, or a multiple of, the operating system block size.
- If your rows are small and you use a large block size, fetching a block may bring in many rows you are (probably) not interested in, and the operating system has to do more work to get the row(s) you actually want.
- Larger blocks also mean more rows cached for the same number of DB_BLOCK_BUFFERS (taking up more memory).
2. Larger Oracle block sizes may give better index balancing, with more rows in each branch.
3. Remember that with more processes looking at the same block, you are more likely to get block contention.
4. If your rows are comparatively large, then you may need a large block size to (possibly) prevent chaining.
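As a quick sanity check before weighing these trade-offs, the block size the database was created with can be confirmed from a SQL*Plus session (a minimal sketch; the dictionary views shown are standard):

```sql
-- Check the block size the database was created with.
-- DB_BLOCK_SIZE cannot be changed after creation, so this is
-- purely a verification step before sizing buffers and extents.
SELECT value
  FROM v$parameter
 WHERE name = 'db_block_size';

-- Equivalent SQL*Plus shortcut:
-- SHOW PARAMETER db_block_size
```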
There is no single fix when an import is unbearably slow, but there are several things that can be done to shorten the time it takes.
System Level Changes
1. Create and use one large rollback segment, and take all other rollback segments offline. One rollback segment approximately 50% of the size of the largest table being imported should be large enough. Import basically does 'insert into tabX values (',,,,,,,')' for every row in your database, so the rollback generated for INSERT statements is only the rowid for each row inserted. Also create the rollback segment with the minimum of 2 extents of equal size.
2. Put the database in NOARCHIVELOG mode until the import is complete. This will reduce the overhead of creating and managing archive logs.
3. As with the rollback segment, create several large redo log files, the larger the better. The larger the log files, the fewer log switches are needed. Check the alert log for messages like 'Thread 1 cannot allocate new log, sequence 17, Checkpoint not complete'. This indicates the log files need to be bigger or you need more of them.
4. If possible, make sure that rollback, table data, and redo log files are all on separate disks. This increases throughput by reducing I/O contention.
5. Make sure there is no I/O contention occurring. If possible, don't run other jobs which may compete with the import for system resources.
6. Make sure there are no statistics on data dictionary tables.
7. Check the sqlnet.ora in the TNS_ADMIN location. Make sure that TRACE_LEVEL_CLIENT = OFF.
8. Increase DB_BLOCK_SIZE when recreating the database, if possible. The larger the block size, the smaller the number of I/O cycles needed. This change is permanent, so consider all its effects before changing it.
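The rollback-segment, NOARCHIVELOG, and redo-log steps above can be sketched roughly as follows. Segment names, sizes, and file paths here are illustrative placeholders, not prescriptions:

```sql
-- 1. One large rollback segment (~50% of the largest table being
--    imported), created with the minimum 2 equally sized extents.
CREATE ROLLBACK SEGMENT rbs_import
  TABLESPACE rbs
  STORAGE (INITIAL 500M NEXT 500M MINEXTENTS 2);
ALTER ROLLBACK SEGMENT rbs_import ONLINE;

-- Take the regular rollback segments offline for the duration.
ALTER ROLLBACK SEGMENT rbs01 OFFLINE;
ALTER ROLLBACK SEGMENT rbs02 OFFLINE;

-- 2. Disable archiving until the import is complete
--    (requires a clean shutdown and a MOUNTed instance).
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE NOARCHIVELOG;
ALTER DATABASE OPEN;

-- 3. Add a few large redo log groups to cut down on log switches.
ALTER DATABASE ADD LOGFILE GROUP 4 ('/u02/oradata/redo04.log') SIZE 500M;
ALTER DATABASE ADD LOGFILE GROUP 5 ('/u02/oradata/redo05.log') SIZE 500M;
```

Remember to reverse these changes (re-enable ARCHIVELOG, bring the normal rollback segments back online) once the import has finished.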
Init.ora Parameter Changes
1. Set LOG_CHECKPOINT_INTERVAL to a number that is larger than the size of the redo log files. This number is in OS blocks (512 bytes on most Unix systems). This reduces checkpoints to a minimum (only at log switch time).
2. Increase SORT_AREA_SIZE. Indexes are not being built yet, but any unique or primary key constraints will be. The increase depends on what other activity is on the machine and how much free memory is available. Try 5-10 times the normal setting. If the machine starts swapping and paging, you have set it too high.
3. Try increasing DB_BLOCK_BUFFERS and SHARED_POOL_SIZE. The shared pool holds cached dictionary information and objects such as cursors, procedures, and triggers. Dictionary information and cursors created on the import's behalf (there may be many, since it is always working on a new table) can sometimes clog the pool: stale entries sit around until the aging/flush mechanism kicks in on a per-request basis, when a request cannot be satisfied from the lookaside lists. ALTER SYSTEM FLUSH SHARED_POOL throws out *all* currently unused objects in one operation and hence defragments the pool.
4. If you can restart the instance with a bigger SHARED_POOL_SIZE prior to importing, that would definitely help.
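Collected into an init.ora fragment, the changes above might look like this. The values are placeholders to be tuned against available memory and the redo log size, not recommendations:

```
# Temporary init.ora settings for a large import (illustrative values)
log_checkpoint_interval = 2000000    # in OS blocks; set larger than the redo logs
sort_area_size          = 10485760   # ~10 MB; try 5-10x the normal setting
db_block_buffers        = 20000      # scale against available memory
shared_pool_size        = 100000000  # restart with a bigger pool if possible
```

Revert these to their normal values after the import, since they are sized for a single bulk load rather than regular operation.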
Import Options Changes
1. Use COMMIT=N. This will cause import to commit after each object (table), not after each buffer. This is why one large rollback segment is needed.
2. Use a large BUFFER size. This value also depends on system activity, database size, etc. Several megabytes is usually enough, but if you have the memory it can go higher. Again, check for paging and swapping at the OS level to see if it is set too high. A larger buffer reduces the number of times the import program has to go to the export file for data; each time, it fetches one buffer's worth of data.
3. Consider using INDEXES=N during import. The user-defined indexes would normally be created after each table has been created and populated, but if the primary objective of the import is to get the data in as fast as possible, then importing with INDEXES=N will help. The indexes can then be created at a later date when time is not a factor. If this approach is chosen, you will need to use the INDEXFILE option to extract the DDL for the index creation, or to re-run the import with INDEXES=Y and ROWS=N.
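Put together, an import run following these options might use a parameter file along these lines (the file names and buffer size are illustrative, and all of the imp parameters shown are standard):

```
# import.par -- illustrative parameter file for imp
FILE=full_export.dmp
LOG=import.log
COMMIT=N
BUFFER=10485760
INDEXES=N
FULL=Y

# Run as: imp system/<password> PARFILE=import.par
#
# Later, to extract the index DDL without re-importing any rows:
#   imp system/<password> FILE=full_export.dmp FULL=Y \
#       INDEXFILE=create_indexes.sql INDEXES=Y ROWS=N
```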
Large Imports of LOB Data:
1. Eliminate indexes. This affects total import time significantly. The existence of LOB data requires special consideration: the LOB locator has a primary key that cannot be explicitly dropped or ignored during the import process.
2. Make certain that sufficient space, in sufficiently large contiguous chunks, is available to complete the data load. The following should provide an accurate picture of the space available in the target tablespace:
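A standard check against the DBA_FREE_SPACE dictionary view along these lines gives both the total free space and the largest contiguous chunk per tablespace (the tablespace name is an illustrative placeholder):

```sql
-- Free space and largest contiguous extent in the target tablespace.
SELECT tablespace_name,
       SUM(bytes)/1024/1024 AS free_mb,
       MAX(bytes)/1024/1024 AS largest_chunk_mb
  FROM dba_free_space
 WHERE tablespace_name = 'USERS'   -- substitute the target tablespace
 GROUP BY tablespace_name;
```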
Large Imports of LONG Data:
1. Importing a table with a LONG column may cause a higher rate of I/O and disk utilization than importing a table without a LONG column.
2. There are no parameters available within IMP to help optimize the import of these data types.
REMEMBER THE RULE OF THUMB: an import usually takes at least 2 to 2.5 times as long as the corresponding export.
Generally speaking, a more conservative formula for estimating a target elapsed time for a table import from the elapsed time of the table export is: import elapsed time = export elapsed time x 4.
I reserve the right to delete comments that are not contributing to the overall theme of the BLOG or are insulting or demeaning to anyone. The posts on this BLOG are provided "as is" with no warranties and confer no rights. The opinions expressed on this site are mine and mine alone, and do not necessarily represent those of my employer.