ORACLE AND AWS RDS FOR ORACLE TIP: Archive log retention and switch log file in AWS RDS Oracle

I ran into an issue while migrating my on-premises Oracle database to AWS RDS for Oracle: the archive area went 100% full and I was no longer able to connect to the database.
When you create your database in the AWS Cloud, keep in mind that you also need to allocate additional space for the archive area. The purpose of this post is to show you how to check the archive log retention in your RDS for Oracle database and how to generate a new archive log using Amazon's syntax.

As an experienced Oracle DBA, the first thing you would do is check the archive area: query V$ARCHIVED_LOG to see how much space the archive log files are using, or run the familiar "archive log list" command to check the location of the archive destination.
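
On-premises, for example, "archive log list" typically returns something like this (a sketch; the log mode, destination, and sequence numbers below are illustrative):

SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     115
Next log sequence to archive   117
Current log sequence           117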

However, on AWS RDS you do not have the privileges to run "archive log list", even when you are connected as the master user of the RDS instance, as you can see below:


SQL> archive log list
ORA-01031: insufficient privileges
SQL>

Also, sorry to disappoint you, but the view V$ARCHIVED_LOG does not reflect the real amount of archive logs kept on your RDS instance:


SQL>  select nvl(sum(BLOCKS * BLOCK_SIZE),0)/1024/1024/1024 GB from V$ARCHIVED_LOG  where DEST_ID=1 and ARCHIVED='YES' and DELETED='NO';
GB
1129.10634

To find out the correct size, use the table function provided by the RDS instance.

First, create the archive log directory object:


SQL>  exec rdsadmin.rdsadmin_master_util.create_archivelog_dir;

PL/SQL procedure successfully completed.
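
As an aside, the rdsadmin_master_util package also offers a matching drop procedure if you later need to remove this directory (a sketch, not part of the original workflow):

SQL> exec rdsadmin.rdsadmin_master_util.drop_archivelog_dir;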

Then query it with the listdir table function:


SQL> select sum(FILESIZE)/1024/1024 ARCH_MB from table(rdsadmin.rds_file_util.listdir(p_directory => 'ARCHIVELOG_DIR'));
 ARCH_MB
.037597656
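
If you want to see the individual archive log files rather than just the total, the same listdir table function can be queried file by file (a sketch based on the call above):

SQL> select filename, filesize, mtime
     from table(rdsadmin.rds_file_util.listdir(p_directory => 'ARCHIVELOG_DIR'))
     order by mtime;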

According to Amazon, the archive logs are copied to an S3 bucket and then purged from the instance by a job that runs every five minutes.

Another difference concerns forcing the creation of a new archive log file. Those who work with Oracle know that executing "alter system switch logfile;" triggers a log switch and produces a new archive log. In AWS RDS for Oracle, however, you have to use the rdsadmin procedure instead, as you can see below:


SQL>  exec rdsadmin.rdsadmin_util.switch_logfile;

PL/SQL procedure successfully completed.
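
To confirm that the call really forced a log switch, one option (a sketch, not part of the original post) is to check which online redo log group is CURRENT before and after:

SQL> select group#, sequence#, status from v$log order by sequence#;

SQL> exec rdsadmin.rdsadmin_util.switch_logfile;

SQL> select group#, sequence#, status from v$log order by sequence#;
-- a different group should now show STATUS = 'CURRENT', with a higher SEQUENCE#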

Keep in mind that if you configured your AWS RDS for Oracle instance to perform automated backups and snapshots, those are stored in S3. To check the current retention of your archive logs, call the rdsadmin procedure below:


SQL> set serveroutput on
SQL> exec rdsadmin.rdsadmin_util.show_configuration

The output will be something like this:


SQL> set serveroutput on
SQL> exec rdsadmin.rdsadmin_util.show_configuration
NAME:archivelog retention hours
VALUE:2
DESCRIPTION:ArchiveLog expiration specifies the duration in hours before
archive/redo log files are automatically deleted.
NAME:tracefile retention
VALUE:10080
DESCRIPTION:tracefile expiration specifies the duration in minutes before
tracefiles in bdump are automatically deleted.

PL/SQL procedure successfully completed.

Now suppose I want to change the retention from 2 hours to 240 hours:


SQL> begin
  rdsadmin.rdsadmin_util.set_configuration(
    name  => 'archivelog retention hours',
    value => '240');
end;
/

PL/SQL procedure successfully completed.

SQL> COMMIT;

Commit complete.

Now check the new retention value:


SQL> set serveroutput on
SQL> exec rdsadmin.rdsadmin_util.show_configuration
NAME:archivelog retention hours
VALUE:240
DESCRIPTION:ArchiveLog expiration specifies the duration in hours before
archive/redo log files are automatically deleted.
NAME:tracefile retention
VALUE:10080
DESCRIPTION:tracefile expiration specifies the duration in minutes before
tracefiles in bdump are automatically deleted.

PL/SQL procedure successfully completed.
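
As a side note, the same set_configuration procedure can also adjust the tracefile retention shown in the output above; a sketch (the value is in minutes, per the description):

SQL> begin
  rdsadmin.rdsadmin_util.set_configuration(
    name  => 'tracefile retention',
    value => '43200');   -- 43200 minutes = 30 days
end;
/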

2 Thoughts to “ORACLE AND AWS RDS FOR ORACLE TIP: Archive log retention and switch log file in AWS RDS Oracle”

  1. Jim Dickson

    Bruno
    “According to Amazon,the archive logs will be copied to an S3 bucket, and then they will be purged instantly every five minutes.”

    Do you have any reference to AWS docs for the above?
    We are seeing unexpected behaviour in retention of archive logs.

    1. brunors

      Hello Jim Dickson,
      Thank you for your comment. I would suggest you look into the topics below:
      Check whether you have any Object Expiration rule: https://docs.aws.amazon.com/AmazonS3/latest/userguide/lifecycle-expire-general-considerations.html
      Check the lifecycle configuration rules on your S3 bucket: https://aws.amazon.com/premiumsupport/knowledge-center/s3-empty-bucket-lifecycle-rule/
      Check and implement Amazon S3 Event Notifications: https://docs.aws.amazon.com/AmazonS3/latest/userguide/NotificationHowTo.html
      Regarding your specific question, I think I got that information while working with AWS Support, but I believe it had to do with the lifecycle configuration I had in place back then. Additionally, I would also suggest checking https://docs.aws.amazon.com/AmazonS3/latest/userguide/how-to-set-lifecycle-configuration-intro.html and https://docs.aws.amazon.com/AmazonS3/latest/userguide/empty-bucket.html.
      I believe that by checking the configuration of your S3 bucket you will get closer to the issue you are facing. In addition, I would open a service request with AWS so they can get a better overview of your configuration and explain this behavior.

      Kind regards, Bruno Reis.
