I wrote a DFS program to batch load data into a Documentum 6.0 repository. I noticed that objects without content files load at a constant rate, at least for the first few thousand records, which is all I've tested so far. But the load rate for objects with content files got slower and slower the longer the batch ran.
I updated the program to establish a new session every 1,000 rows, and voilà - back to constant load times. My theory is that DFS maintains a session cache of some sort, and the longer the session runs, the larger the cache gets. Recycling the session presumably starts over with a fresh, empty cache. But that is just a theory. Whatever the underlying cause, simply establishing a new session every 1,000 rows resolved the gradually-slower-and-slower issue.
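The recycling pattern can be sketched roughly as follows. This is a hypothetical illustration, not the actual program: `RepositorySession` and `importRow` are stand-ins for the real DFS service context and object-creation calls, which are omitted here.

```java
import java.util.ArrayList;
import java.util.List;

public class BatchLoader {
    static final int ROWS_PER_SESSION = 1000;

    // Simulated session; in real DFS code this would wrap a service
    // context and the object service used to create repository objects.
    static class RepositorySession implements AutoCloseable {
        RepositorySession() { /* authenticate, build service context */ }
        void importRow(String row) { /* create object, attach content */ }
        @Override public void close() { /* release the session */ }
    }

    // Loads all rows, opening a fresh session every ROWS_PER_SESSION rows
    // so any per-session cache is discarded before it grows too large.
    // Returns the number of sessions opened.
    static int load(List<String> rows) {
        int sessionsOpened = 0;
        RepositorySession session = null;
        try {
            for (int i = 0; i < rows.size(); i++) {
                if (i % ROWS_PER_SESSION == 0) {
                    if (session != null) session.close();
                    session = new RepositorySession();
                    sessionsOpened++;
                }
                session.importRow(rows.get(i));
            }
        } finally {
            if (session != null) session.close();
        }
        return sessionsOpened;
    }

    public static void main(String[] args) {
        List<String> rows = new ArrayList<>();
        for (int i = 0; i < 3500; i++) rows.add("row-" + i);
        System.out.println(load(rows)); // 4 sessions for 3,500 rows
    }
}
```

The batch size of 1,000 is just the value that worked for me; the right threshold likely depends on object size and how quickly the session state grows.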
Tuesday, January 6, 2009