
Tracking Down the SQL Behind an Oracle Redo Log Surge

1. Overview
Recently the database has been intermittently reporting long-running SQL; in some cases execution ran so long that connections were dropped. The steps below outline a troubleshooting approach.

2. Check the redo log file sizes and log group layout
SELECT L.GROUP#,
       LF.MEMBER,
       L.ARCHIVED,
       L.BYTES / 1024 / 1024 "SIZE(M)",
       L.MEMBERS
FROM V$LOG L,
     V$LOGFILE LF
WHERE L.GROUP# = LF.GROUP#;

The output shows that there are currently 10 log groups, each with 2 members, and each member file is 3 GB.
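As a quick sanity check, the online redo capacity implied by those numbers can be computed (a minimal sketch; the figures come from the query output above, and members within a group are mirrored copies, so they add disk usage but not capacity):

```python
# Online redo layout reported by V$LOG / V$LOGFILE (from the query above).
groups = 10              # number of log groups
members_per_group = 2    # mirrored members: redundant copies, not extra capacity
size_gb = 3              # each member file is 3 GB

# Usable capacity is one member's size per group; mirrors only consume disk.
total_capacity_gb = groups * size_gb
total_disk_gb = groups * members_per_group * size_gb

print(f"usable online redo: {total_capacity_gb} GB")  # 30 GB
print(f"disk consumed:      {total_disk_gb} GB")      # 60 GB
```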

3. Count the archive logs generated per hour over the last 30 days
set lines 200 pages 50000
column h0 format 999
column h1 format 999
column h2 format 999
column h3 format 999
column h4 format 999
column h5 format 999
column h6 format 999
column h7 format 999
column h8 format 999
column h9 format 999
column h10 format 999
column h11 format 999
column h12 format 999
column h13 format 999
column h14 format 999
column h15 format 999
column h16 format 999
column h17 format 999
column h18 format 999
column h19 format 999
column h20 format 999
column h21 format 999
column h22 format 999
column h23 format 999
column avg format a15
column day format a6

SELECT TRUNC (first_time) "Date", TO_CHAR (first_time, 'Dy') "Day", COUNT (1) "Total",
SUM (DECODE (TO_CHAR (first_time, 'hh24'), '00', 1, 0)) "h0",
SUM (DECODE (TO_CHAR (first_time, 'hh24'), '01', 1, 0)) "h1",
SUM (DECODE (TO_CHAR (first_time, 'hh24'), '02', 1, 0)) "h2",
SUM (DECODE (TO_CHAR (first_time, 'hh24'), '03', 1, 0)) "h3",
SUM (DECODE (TO_CHAR (first_time, 'hh24'), '04', 1, 0)) "h4",
SUM (DECODE (TO_CHAR (first_time, 'hh24'), '05', 1, 0)) "h5",
SUM (DECODE (TO_CHAR (first_time, 'hh24'), '06', 1, 0)) "h6",
SUM (DECODE (TO_CHAR (first_time, 'hh24'), '07', 1, 0)) "h7",
SUM (DECODE (TO_CHAR (first_time, 'hh24'), '08', 1, 0)) "h8",
SUM (DECODE (TO_CHAR (first_time, 'hh24'), '09', 1, 0)) "h9",
SUM (DECODE (TO_CHAR (first_time, 'hh24'), '10', 1, 0)) "h10",
SUM (DECODE (TO_CHAR (first_time, 'hh24'), '11', 1, 0)) "h11",
SUM (DECODE (TO_CHAR (first_time, 'hh24'), '12', 1, 0)) "h12",
SUM (DECODE (TO_CHAR (first_time, 'hh24'), '13', 1, 0)) "h13",
SUM (DECODE (TO_CHAR (first_time, 'hh24'), '14', 1, 0)) "h14",
SUM (DECODE (TO_CHAR (first_time, 'hh24'), '15', 1, 0)) "h15",
SUM (DECODE (TO_CHAR (first_time, 'hh24'), '16', 1, 0)) "h16",
SUM (DECODE (TO_CHAR (first_time, 'hh24'), '17', 1, 0)) "h17",
SUM (DECODE (TO_CHAR (first_time, 'hh24'), '18', 1, 0)) "h18",
SUM (DECODE (TO_CHAR (first_time, 'hh24'), '19', 1, 0)) "h19",
SUM (DECODE (TO_CHAR (first_time, 'hh24'), '20', 1, 0)) "h20",
SUM (DECODE (TO_CHAR (first_time, 'hh24'), '21', 1, 0)) "h21",
SUM (DECODE (TO_CHAR (first_time, 'hh24'), '22', 1, 0)) "h22",
SUM (DECODE (TO_CHAR (first_time, 'hh24'), '23', 1, 0)) "h23", to_char(ROUND (COUNT (1) / 24, 2),'fm99999999990.00') "Avg"
FROM gv$log_history
WHERE first_time >= trunc(SYSDATE) - 30
and thread# = inst_id
GROUP BY TRUNC (first_time), TO_CHAR (first_time, 'Dy')
ORDER BY 1 DESC;

The output shows that during business peaks roughly 40 log files are generated per hour (each log file is currently 3 GB), i.e. one 3 GB file about every 1.5 minutes. Oracle's general guidance is that redo logs should switch on average about once every 30 minutes.
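A back-of-the-envelope check of those numbers (a minimal sketch; the 40 files/hour and 3 GB figures come from the output above, and 30 minutes is the switch-interval target just cited):

```python
# Peak archive-log generation observed in the query above.
files_per_hour = 40
file_size_gb = 3

redo_gb_per_hour = files_per_hour * file_size_gb   # 120 GB of redo per peak hour
minutes_per_switch = 60 / files_per_hour           # one switch every 1.5 minutes

# Log size that would stretch switches to the ~30-minute guideline at this
# redo rate. Illustrative only: a 60 GB log file is impractical, which is
# why reducing redo volume (the rest of this article) is the realistic fix.
target_switch_minutes = 30
required_log_gb = redo_gb_per_hour * target_switch_minutes / 60

print(redo_gb_per_hour, minutes_per_switch, required_log_gb)
```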

4. Find the segments with the most block changes in the last 2 hours
Heavy redo generation necessarily comes from heavy block-change activity. The AWR history views can pinpoint the segments with the most block changes.

The query below covers the last 2 hours (120 minutes) via begin_interval_time > sysdate - 120/1440; since there are 1440 minutes in a day, change the numerator to look back a different number of minutes.
set lines 200 pages 50000
col object_name for a30
select *
from (SELECT to_char(begin_interval_time, 'YYYY_MM_DD HH24:MI') snap_time,
             dhsso.object_name,
             SUM(db_block_changes_delta)
      FROM dba_hist_seg_stat dhss,
           dba_hist_seg_stat_obj dhsso,
           dba_hist_snapshot dhs
      WHERE dhs.snap_id = dhss.snap_id
        AND dhs.instance_number = dhss.instance_number
        AND dhss.obj# = dhsso.obj#
        AND dhss.dataobj# = dhsso.dataobj#
        AND begin_interval_time > sysdate - 120 / 1440
      GROUP BY to_char(begin_interval_time, 'YYYY_MM_DD HH24:MI'), dhsso.object_name
      order by 3 desc)
where rownum <= 10;

SNAP_TIME        OBJECT_NAME            SUM(DB_BLOCK_CHANGES_DELTA)
---------------- ---------------------- ---------------------------
2026_03_23 12:00 CUX_28_PO_LIMS_HIS_T                     233906784
2026_03_23 14:00 CUX_28_PO_LIMS_HIS_T                     211559584
2026_03_23 11:00 CUX_28_PO_LIMS_HIS_T                     209755552
2026_03_23 12:30 CUX_28_PO_LIMS_HIS_T                     209172096
2026_03_23 11:30 CUX_28_PO_LIMS_HIS_T                     206936032
2026_03_22 18:30 CUX_28_PO_LIMS_HIS_T                     206140784
2026_03_23 13:30 CUX_28_PO_LIMS_HIS_T                     192636688
2026_03_22 20:00 CUX_28_PO_LIMS_HIS_T                     192298768
2026_03_22 23:30 CUX_28_PO_LIMS_HIS_T                     190108304
2026_03_23 10:30 CUX_28_PO_LIMS_HIS_T                     190101344

10 rows selected.
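The same ranking can also be reproduced client-side once the per-snapshot rows are fetched, which is handy when comparing several time windows without re-querying AWR (a minimal sketch; the sample tuples are copied from the output above):

```python
from collections import defaultdict

# (snap_time, object_name, db_block_changes_delta) rows as returned by the
# AWR query above (sample values copied from its output).
rows = [
    ("2026_03_23 12:00", "CUX_28_PO_LIMS_HIS_T", 233906784),
    ("2026_03_23 14:00", "CUX_28_PO_LIMS_HIS_T", 211559584),
    ("2026_03_23 11:00", "CUX_28_PO_LIMS_HIS_T", 209755552),
]

# Total block changes per object across all snapshots, largest first.
totals = defaultdict(int)
for _snap_time, object_name, delta in rows:
    totals[object_name] += delta

for object_name, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(object_name, total)
```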

5. Find the SQL statements touching those top objects in the AWR views
Note: CUX_28_PO_LIMS_HIS_T in LIKE '%CUX_28_PO_LIMS_HIS_T%' is the OBJECT_NAME returned in step 4.

SELECT to_char(begin_interval_time, 'YYYY_MM_DD HH24:MI'),
       dbms_lob.substr(sql_text, 4000, 1),
       dhss.instance_number,
       dhss.sql_id,
       executions_delta,
       rows_processed_delta
FROM dba_hist_sqlstat dhss,
     dba_hist_snapshot dhs,
     dba_hist_sqltext dhst
WHERE UPPER(dhst.sql_text) LIKE '%CUX_28_PO_LIMS_HIS_T%'
  AND dhss.snap_id = dhs.snap_id
  AND dhss.instance_Number = dhs.instance_number
  AND dhss.sql_id = dhst.sql_id;

-- Lighter variant of the same query, omitting the full SQL text column:
SELECT to_char(begin_interval_time, 'YYYY_MM_DD HH24:MI'),
       dhss.instance_number,
       dhss.sql_id,
       executions_delta,
       rows_processed_delta
FROM dba_hist_sqlstat dhss,
     dba_hist_snapshot dhs,
     dba_hist_sqltext dhst
WHERE UPPER(dhst.sql_text) LIKE '%CUX_28_PO_LIMS_HIS_T%'
  AND dhss.snap_id = dhs.snap_id
  AND dhss.instance_Number = dhs.instance_number
  AND dhss.sql_id = dhst.sql_id;

6. Use the ASH views to find the sessions, modules, and machines running those SQL statements
select * from dba_hist_active_sess_history WHERE sql_id = '79kny7qpjdau8';
select * from v$active_session_history where sql_Id = '79kny7qpjdau8';

select * from dba_hist_active_sess_history WHERE sql_id = 'gyt4tdnyzwpdn';
select * from v$active_session_history where sql_Id = 'gyt4tdnyzwpdn';

79kny7qpjdau8 and gyt4tdnyzwpdn are SQL_IDs; replace them with values from the SQL_ID column returned in step 5.

7. Investigate the problem SQL
Step 4 identified the three tables, including CUX_28_PO_LIMS_HIS_T, responsible for most of the redo generation, and step 5 then identified the top five SQL statements for each table. These redo-heavy SQL statements are where tuning effort should be focused.
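Once the ASH rows from step 6 are fetched, counting samples per (session, module, machine) quickly shows where a statement is being run from, since each v$active_session_history sample represents roughly one second of active time (a minimal sketch; the column names match the ASH views, the sample values are made up):

```python
from collections import Counter

# (session_id, module, machine) columns from the ASH queries above for one
# sql_id. Sample counts approximate active time. Values are illustrative.
ash_rows = [
    (4711, "JDBC Thin Client", "apphost01"),
    (4711, "JDBC Thin Client", "apphost01"),
    (4712, "SQL*Plus", "dbhost01"),
]

# Rank originating sessions by how many ASH samples they account for.
by_origin = Counter(ash_rows)
for (sid, module, machine), samples in by_origin.most_common():
    print(sid, module, machine, samples)
```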

Originally published on 徐万新之路; do not reproduce without permission.
