

Thanks to Lloyd Palfrey, who actually wrote it:

    CREATE PROCEDURE "_SYS_BIC"."dba_reorg"(IN pi_schema VARCHAR(60)) LANGUAGE SQLSCRIPT AS
    ...
        -- inside a loop over the tables of the schema (the declaration of
        -- cursor cur_tablename is not shown in this excerpt):
        EXEC 'UPDATE "' || :pi_schema || '"."' || cur_tablename.TABLE_NAME
             || '" WITH PARAMETERS (''OPTIMIZE_COMPRESSION'' = ''FORCE'')';
    ...
        EXEC 'ALTER SYSTEM RECLAIM VERSION SPACE';
        EXEC 'ALTER SYSTEM RECLAIM DATAVOLUME 110 DEFRAGMENT';
    ...

After creating the procedure you can run it with this statement:

    call "_SYS_BIC"."dba_reorg"('INSERT_SCHEMA_NAME_HERE');

First of all: thanks for sharing your experiences with SAP HANA. They definitely add to this community, so I am happy to have you blogging here again.

However, I consider this type of blog post "selling snake oil" or "the magic bullet" (or whatever you want to call the miraculous thing that is supposed to improve your situation). While you put the caveats in red letters to provide a warning, I'd say that's not really providing enough information on the operations executed by the script.

Of course the script will lead to a reduction of space usage. Whether it actually reaches 30% on a regular basis is questionable, though. And since the script just applies a set of operations without actually analysing the current state of the system, it really is a sledgehammer approach, and it reminds me a lot of the recommendations for index rebuilds, defragmentation runs and CBO statistics re-collections that were (and are) so common for other database platforms.

The obvious reason for not permanently trying to use the best possible compression is of course: that's computationally expensive. It takes time and uses resources that could be used for business transactions instead. Besides, it's not so much the pure change of data volume that leads to compression methods no longer being optimal. It's the change of data distribution within each column of a table, and of the columns relative to each other, that makes the difference here. Also, it's not possible to determine upfront whether, and how large, the benefit would actually turn out to be. The automatic compression optimisation is therefore calibrated to be rather conservative, in order not to disturb system operations too much. But it will run if the table contents fluctuate massively. Why your example table changed all that much would need to be looked at in detail; my gut feeling, however, says that it's not the compression optimisation.

Since some of the activities impose locks on various levels (table locks and system-wide savepoint locks), running the script during production hours might end up in a halted system. Technically you could restart SAP HANA during any of the operations, but that will of course require extended startup times due to the required recovery. For the script itself: in order to run it, the executing user must have the UPDATE privilege on the tables of the target schema. This should actually not be the case for normal administrators. And running it on a test/QA system will only allow a time estimation if that system holds nearly the same data and sees a similar load during the run.

Anyway, now that this little loop script has been published, users will use it forever and likely not heed any further warnings (like this current one). Knowing the effects of such recommendations, I can see another wave of support tickets heading for the colleagues in SAP HANA support. So, just in case anyone uses the script and gets into trouble: I told you so.
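Rather than forcing compression optimisation across a whole schema, one could first inspect which tables and columns actually carry a large, possibly sub-optimally compressed footprint. A minimal sketch, assuming SAP HANA's monitoring view M_CS_COLUMNS with its COMPRESSION_TYPE and MEMORY_SIZE_IN_TOTAL columns (verify the view and column names on your release):

```sql
-- Inspect the largest column-store columns and their current compression
-- type before deciding where a forced re-compression could pay off.
-- M_CS_COLUMNS and its column names are assumed from SAP HANA's
-- monitoring views; verify them on your release.
SELECT schema_name,
       table_name,
       column_name,
       compression_type,
       memory_size_in_total
FROM   m_cs_columns
WHERE  schema_name = 'INSERT_SCHEMA_NAME_HERE'
ORDER  BY memory_size_in_total DESC
LIMIT  50;
```

A targeted UPDATE ... WITH PARAMETERS on only the few tables this surfaces would be far less disruptive than the loop over the entire schema.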
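To illustrate the privilege prerequisite mentioned above: the UPDATE privilege can be granted at schema level, covering all tables of the schema. REORG_USER is a hypothetical user name used here for illustration only:

```sql
-- Hypothetical example: grant the schema-wide UPDATE privilege the
-- script needs to a dedicated user (REORG_USER is made up here).
GRANT UPDATE ON SCHEMA "INSERT_SCHEMA_NAME_HERE" TO REORG_USER;
```

Keeping this privilege on a dedicated maintenance user, rather than on everyday administrator accounts, limits the blast radius of accidentally running the script in production.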

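Since the benefit of a forced compression optimisation cannot be determined upfront, the pragmatic alternative to a blanket run is to record the memory footprint before and after a targeted run and compare the two. A sketch, assuming SAP HANA's M_CS_TABLES monitoring view (verify the view and column names on your release):

```sql
-- Record table sizes before and after running the procedure, so the
-- actual space saving can be quantified instead of assumed.
-- M_CS_TABLES and its columns are assumed from SAP HANA's monitoring
-- views; verify them on your release.
SELECT schema_name,
       table_name,
       memory_size_in_total,
       last_compressed_record_count
FROM   m_cs_tables
WHERE  schema_name = 'INSERT_SCHEMA_NAME_HERE'
ORDER  BY memory_size_in_total DESC;
```

Running this query into a snapshot table before and after the reorg turns the vague "up to 30%" claim into a measurable number for your own system.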