
When online performs an archive, it merely reads the chunk-free-list pages for all of the chunks associated with the dbspace(s) that the front-end (OnBar) requested. Every (non-blobspace) page that is not marked as free in the chunk free list is read into memory. At that time online performs 'mild' consistency checks: it verifies only that the page address is the one requested and that the timestamp meets the criteria for archiving. Each valid page is placed on a transport buffer that, when full, is sent to the front-end. Actually, since the transport buffer is in shared memory, all that is passed to the front-end is the buffer's memory address; the front-end then reads the data from shared memory itself.
This algorithm will be changing soon. The new algorithm will also scan the bitmap pages of every tablespace in the dbspace, so that free pages located inside of extents can be eliminated from the read as well.