I have a schema estimated at around 220GB, with 2 huge tables comprising around 100 million rows, and it's not feasible to perform the migration all in one go or via the transportable tablespace (TTS) method. Therefore I'd like to split the export by row count, exporting for example 10 million rows at a time, which will leave me with 10 large dump files to ship across to the new server and import.
Can I do something like this:
Code:
expdp schemas=schema include=TABLE:\"IN ('TABLE')\" query=schema.table:\"WHERE rownum <= 10000000\" dumpfile=01.dmp ... (first 10 million rows)
expdp schemas=schema include=TABLE:\"IN ('TABLE')\" query=schema.table:\"WHERE rownum >= 10000001 AND rownum <= 20000000\" dumpfile=02.dmp ... (rows 10,000,001 to 20,000,000)
expdp schemas=schema include=TABLE:\"IN ('TABLE')\" query=schema.table:\"WHERE rownum >= 20000001 AND rownum <= 30000000\" dumpfile=03.dmp ... (rows 20,000,001 to 30,000,000)
And so on until I have taken 10 exports up to 100 million rows
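To make the intent concrete, here's a rough shell sketch of the loop I have in mind, generating one parameter file per 10-million-row chunk. The schema/table names, file names, and the parfile layout are placeholders I've invented, and whether the rownum ranges in the QUERY clause actually filter this way is exactly what I'm asking in question 1 below:

```shell
#!/bin/sh
# Sketch only: generate one Data Pump parameter file per 10M-row chunk.
# "myschema"/"BIGTABLE" are placeholder names, not my real objects.
CHUNK=10000000      # 10 million rows per dump
TOTAL=100000000     # estimated total row count
n=1
start=1
while [ "$start" -le "$TOTAL" ]; do
  end=$((start + CHUNK - 1))
  # A parfile per chunk avoids most of the shell-quoting pain around QUERY
  cat > "chunk_$n.par" <<EOF
SCHEMAS=myschema
INCLUDE=TABLE:"IN ('BIGTABLE')"
QUERY=myschema.bigtable:"WHERE rownum >= $start AND rownum <= $end"
DUMPFILE=chunk_$(printf '%02d' "$n").dmp
EOF
  # NB: whether "rownum >= $start" selects anything past the first
  # chunk is the open question; this just shows the intended ranges.
  # expdp system/password parfile=chunk_$n.par   # run one export per chunk
  n=$((n + 1))
  start=$((end + 1))
done
```

Running this produces chunk_1.par through chunk_10.par, each covering a consecutive 10-million-row range up to 100 million.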
I have 2 questions
1) I do not believe I have the correct syntax in the 2nd and 3rd of my exports; can you provide your input on the correct syntax?
2) What happens if there are only, say, 93 million rows? Will my final export (rows 90,000,001 to 100,000,000) complete error-free, or will it fail because I specified a range beyond the last row?
Hope I have made myself clear; basically I want to split the schema/table export into smaller chunks so I can migrate to the new environment more easily.
Many thanks for any input you have, Tim.