Thursday, August 1, 2013

Query Tuning Tips

1) E.g. Table1 has 1000 rows and Table2 has 1 row.
 Select * from table1, table2 will return results faster than Select * from table2, table1 (with a rule-based optimizer, the last table in the FROM clause is the driving table, and the smaller table should drive the join).

2) If 3 tables are joined, select the intersection table as the driving table. The intersection table is the one on which the other tables depend, i.e. the table joined to both of the others.

3) Joins: write the table joins first, before any other condition in the WHERE clause. The condition that filters out the most records should be at the end, below the joins, because the WHERE clause is parsed from BOTTOM to TOP.

4) Avoid SELECT *; select only the columns you need.

5) Use DECODE to improve performance by collapsing repeated lookups into one pass. (DECODE is Oracle's shorthand for a simple CASE expression.)
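For instance (illustrative names; DECODE is Oracle syntax, and a simple CASE expression is the portable equivalent):

 select emp_id,
        decode(status, 'A', 'Active', 'T', 'Terminated', 'Unknown') as status_desc
 from emp;
 -- portable form:
 select emp_id,
        case status when 'A' then 'Active'
                    when 'T' then 'Terminated'
                    else 'Unknown' end as status_desc
 from emp;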

6) COUNT(*) is faster than COUNT(1), and COUNT(pkey) can be faster still when the optimizer can answer it from the primary-key index.

7) Restrict records with the WHERE clause rather than the HAVING clause; HAVING filters rows only after they have been fetched and grouped.

8) Minimize table lookups in a query:
 e.g. Select * from tab1
 where col1 = (select col1 from tab2 where colx = 3)
 and col2 = (select col2 from tab2 where colx = 3)
 A more efficient way to write this (when both lookups share the same predicate) is:
 Select * from tab1
 where (col1, col2) = (select col1, col2 from tab2 where colx = 3)
 The same approach can be used for updates.

9) Use EXISTS instead of IN, and NOT EXISTS instead of NOT IN.
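For example (made-up names):

 -- instead of:
 select * from tab1 a where a.col1 in (select col1 from tab2);
 -- prefer:
 select * from tab1 a where exists (select 1 from tab2 b where b.col1 = a.col1);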

10) Use EXISTS in place of DISTINCT when the duplicates come only from the join.
 E.g. Select Distinct a.col1, a.col2 From tab1 a, tab2 b
 Where a.col3 = b.col3
 Instead, the query can be written as:
 Select a.col1, a.col2
 From tab1 a
 Where Exists (select 'X' from tab2 b
               Where a.col3 = b.col3)

11) Use Explain Plan to see how the query will be executed.

12) Use indexes for faster retrieval of data.

13) An index is used when the query filters on the column(s) the index was created on. Index-only access is possible only if every selected column is present in the index; otherwise the table itself must also be read.
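For example (illustrative names), the index below allows index-only access for the first query, while the second must still visit the table:

 create index ix_tab1_c1c2 on tab1 (col1, col2);
 select col1, col2 from tab1 where col1 = 10;  -- index-only access possible
 select col1, col9 from tab1 where col1 = 10;  -- col9 is not in the index, table access needed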

14) Avoid the UNION clause as far as possible.

15) Avoid IS NULL and IS NOT NULL on indexed columns.

16) Optimizer hints can help performance where the default plan is poor.

17) Avoid typecasting of indexed columns.

18) NOT, !=, <> and the concatenation operator || disable the use of indexes.

19) Arithmetic operations on indexed columns in the WHERE clause disable indexes.

20) Use OR instead of IN:
 e.g. Select * from tab where col1 in ('a','b')
 instead use: Select * from tab where col1 = 'a' or col1 = 'b'

21) Avoid unnecessary use of UNION, DISTINCT, MINUS, INTERSECT, ORDER BY and GROUP BY.

22) DISTINCT - always results in a sort
      UNION - always results in a sort
      UNION ALL - does not sort, but retains any duplicates
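A quick illustration with made-up tables:

 select col1 from tab1 union     select col1 from tab2;  -- sorts and removes duplicates
 select col1 from tab1 union all select col1 from tab2;  -- no sort, duplicates retained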

23) ORDER BY
may be faster if columns are indexed
use it to guarantee the sequence of the data

24) GROUP BY
specify only columns that need to be grouped
may be faster if the columns are indexed
do not include extra columns in SELECT list or GROUP BY because DB2 must sort the rows

25) Create indexes for columns you frequently:
ORDER BY
GROUP BY (better than a DISTINCT)
SELECT DISTINCT
JOIN

26) When the results of a join must be sorted -
limiting the ORDER BY to columns of a single table can avoid a sort
specifying columns from multiple tables causes a sort

27) Favor coding explicit INNER and LEFT OUTER joins over RIGHT OUTER joins
(EXPLAIN converts a RIGHT join to a LEFT join)


28) BETWEEN is usually more efficient than a pair of <= and >= predicates, except when comparing a host variable to two columns.

29) Avoid % or _ at the beginning of a LIKE pattern, because it prevents DB2 from using a matching index and may cause a scan. A % or _ at the end still encourages index usage.
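For example (illustrative column name):

 select * from emp where name like '%son';  -- leading %: matching index cannot be used
 select * from emp where name like 'G%';    -- trailing %: matching index can be used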

30) For subqueries using negation logic:
use NOT EXISTS (DB2 merely tests for non-existence)
instead of NOT IN (DB2 must materialize the complete result set)
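A sketch with made-up names:

 -- instead of:
 select * from tab1 a where a.key not in (select b.key from tab2 b);
 -- prefer:
 select * from tab1 a where not exists (select 1 from tab2 b where b.key = a.key);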

31) Use EXISTS to test for a condition and get a true/false answer from DB2 without returning any rows to the query.

32) After the index predicates, place the predicate that eliminates the greatest number of rows first.

33) A few more points that may help:

1)         Avoid DISTINCT wherever possible; check whether it is really required. No DISTINCT is needed when a primary key or unique key is retrieved.

2)         Consider using UNION where an OR condition exists, and eliminate
DISTINCTs, as sketched below.
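A sketch with made-up names; each UNION branch can use its own index, and UNION's duplicate removal replaces the DISTINCT:

 -- instead of:
 select distinct col1 from tab where colA = 1 or colB = 2;
 -- consider:
 select col1 from tab where colA = 1
 union
 select col1 from tab where colB = 2;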

3)         Conditions that are likely to fail should be placed first in a set
of conditions separated by AND.

4)         Always use aliases.

5)         Do not wrap columns in an expression:
      select * from emp where salary/12 >= 4000;
The query should be:
select * from emp where salary >= 4000 * 12;
i.e. avoid arithmetic on columns within SQL statements; arithmetic on an
indexed column will cause DB2 to avoid the use of the index.

6)         Try to avoid built-in or user-defined functions on columns in predicates:
select * from employee where substr(name,1,1) = 'G';
The query should be:
select * from employee where name like 'G%';

7)         Avoid datatype mismatches, since they lead to implicit or explicit
casting:
select * from emp where sal = '1000';
The query should be:
select * from emp where sal = 1000;

8)         Replace unnecessary GROUP BY ... HAVING with a WHERE clause:
select avg(salary) as avgsalary, dept from employee group by dept
having dept = 'information systems';
The query should be:
select avg(salary) as avgsalary, dept from employee
where dept = 'information systems'
group by dept;






Tuesday, July 9, 2013

MERGE Command Examples

MERGE INTO LIQUIDITY.REP_STG_PRODUCT_DATA STG
USING (SELECT VALID_TO, PRODUCT
       FROM LIQUIDITY.REP_PRODUCT_DATA SCD
       WHERE SCD.VALID_TO > ? AND SCD.VALID_FROM <= ?) ad
ON STG.PRODUCT = ad.PRODUCT
WHEN MATCHED THEN
  UPDATE SET (STG.PROCESSED, STG.VALID_TO) = ('D', ad.VALID_TO)
ELSE IGNORE;
  --


MERGE INTO LMFR_REPORT.REF_PRODUCT_DATA R
USING (SELECT ms.VALUE2, fp.SK_FP
       FROM LMFR_REPORT.R_REF_FUNDING_POOL fp
       INNER JOIN LIQUIDITY.LIST_MEMBER_SNAPSHOT ms
          ON ms.LIST_ID = 43
         AND ms.VALUE1 = fp.SECURITY_POOL
         AND ms.DATA_DATE = '2013-01-31') ad
ON ad.SK_FP = R.SK_FP
   AND R.VALID_FROM <= '2013-01-31' AND R.VALID_TO > '2013-01-31'
   AND R.FED_CATEGORY != ad.VALUE2
   AND ((R.FED_CATEGORY = 'A-II' AND ad.VALUE2 = 'A-I')
        OR R.FED_CATEGORY IN ('B', 'D'))
   AND R.PRODUCT_S_P_LT_RATING IS NULL
   AND R.PRODUCT_MOODY_LT_RATING IS NULL
   AND R.PRODUCT_HIERARCHY_LEVEL_2 IN ('DO NOT POST', 'Government')
WHEN MATCHED THEN
  UPDATE SET (R.FED_CATEGORY, R.BASEL_CATEGORY) =
             (ad.VALUE2,
              CASE WHEN ad.VALUE2 = 'A-I'  THEN '0% risk - issued by sovereigns'
                   WHEN ad.VALUE2 = 'A-II' THEN '20% risk - issued by sovereigns'
                   ELSE 'greater 20% risk - issued by sovereigns' END)
ELSE IGNORE;

Shell Script tips

====================
How to get the execution time between two statements in a shell script:
====================
t1=$(perl -e 'print time')   # epoch seconds
sleep 10
t2=$(perl -e 'print time')
timeConsumed=$((t2 - t1))

echo "time taken : $timeConsumed"



====================
find and delete files older than 10 days
====================
find . -name 'subdebt*.dat.Z' -mtime +10 -exec ls -lrt {} \;   # preview what will be removed
find . -name '*.*' -mtime +10 -exec rm -f {} \;                # then delete


====================
use of awk command
====================
tabname=ENH_BAAC
rows_deleted=10323244
enh_age=56
echo TABLE_NAME ROWS_DELETED  ENH_AGE | awk '{printf "%-30s %-20s %-20s \n",$1,$2,$3 }' >test
echo $tabname $rows_deleted $enh_age | awk '{printf "%-30s %-20s %-20s \n",$1,$2,$3 }' >>test
tabname=ENH_LETTERS_OF_CREDIT
rows_deleted=0
enh_age=98
echo $tabname $rows_deleted $enh_age | awk '{printf "%-30s %-20s %-20s \n",$1,$2,$3 }' >>test
cat test


====================
Extension of file
====================
file=/v/region/na/appl/corptsy/lmfr/data/qa/sourced/brm_repo/History/20120730.20120803.174759.dat.Z
echo ${file##*.}

file="thisfile.yogesh.txt"
echo "filename: ${file%.*}"
echo "extension: ${file##*.}"

====================
For loop
====================
for i in 2 3 4 5 6
do
  export DB2NODE=$i
  db2 terminate
  db2 connect to nypd_lmfr
  db2 load query table liquidity.ETL_UK_MORTGAGES
done

====================
If Else Stmt
====================
PREV_TABNAME=dummy1
if [ "${PREV_TABNAME}" != "dummy" ]; then
    echo $PREV_TABNAME
else 
    echo "incor"
fi
====================
Cut Command: to get the 4th column with | as the delimiter
====================
cut -d "|" -f4 test



How to get an explain plan

The command below produces the explain plan for a query.
Make sure the "Org_SL" file contains the query, terminated by a semicolon.

db2expln -d DBNAME -f Org_SL -z \; -t -g -o Org_SL.out

The command below produces the explain plan for a package (e.g. a procedure's package).
db2expln -d DBNAME -package P6472415 -schema SCHEMANAME -g -o PROC.out

How to delete old partitions from a range-partitioned table

The query below lists all partitions older than a certain date; its output identifies the partitions to drop (a sketch of the drop itself follows the query).

SELECT TABSCHEMA, TABNAME, DATAPARTITIONNAME, LOWVALUE, DAYNAME(LOWVALUE) DAY_NAME, DAYOFWEEK(LOWVALUE) DAY_OF_WEEK,
       TRIM(CHAR(DAY(LOWVALUE)))||'_'||MONTHNAME(LOWVALUE) MONTH_ENDS
FROM
(
select TABSCHEMA,TABNAME,DATAPARTITIONNAME, DATE(REPLACE(LOWVALUE,'''','')) AS LOWVALUE from syscat.datapartitions
where tabname ='TABNAME'
AND LOWVALUE NOT IN ('MINVALUE','MAXVALUE')
)DT
WHERE LOWVALUE < DATE(date('2012-08-01') - 90 days)
AND DAYOFWEEK(LOWVALUE) != 6
with ur;
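A hedged sketch of what dropping one of the returned partitions looks like in plain SQL (partition and table names are illustrative); the db2ts command generated below automates the same detach-and-drop:

ALTER TABLE TABSCHEMA.TABNAME DETACH PARTITION PART_20120430 INTO TABSCHEMA.TABNAME_DET;
-- once the detach completes (run SET INTEGRITY on dependent tables if required):
DROP TABLE TABSCHEMA.TABNAME_DET;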

-- Query to generate partition-drop commands, skipping month-end and weekend partitions.
select
        'db2ts detach table partition -dbname NYPD_LMFR -schema '||dp.TABSCHEMA||' -name '||TABNAME||' -partition '||DATAPARTITIONNAME||' -cleanup drop -cmrs batch -mailto lmfr_dba'
from
    syscat.datapartitions dp
where
    dp.HIGHVALUE != 'MAXVALUE'
    AND TABNAME = 'HUB_PENDING_PART'
    AND date(REPLACE(HIGHVALUE,'''','')) < current date - 3 months
    -- skip month-end partitions
    AND date(REPLACE(HIGHVALUE,'''','')) != date(REPLACE(HIGHVALUE,'''','')) + 1 days - day(date(REPLACE(HIGHVALUE,'''',''))) days
    -- skip weekend partitions
    AND DAYNAME(date(REPLACE(HIGHVALUE,'''',''))) != 'Friday'
with ur;

Monday, April 29, 2013

How can I delete x number of rows?
delete from (select 1 from howardg.prod_bal fetch first 10 rows only)
How can I delete x number of rows in a loop?

CREATE  PROCEDURE DELETE_IN_LOOP()
  LANGUAGE SQL
  BEGIN
    DECLARE at_end INTEGER DEFAULT 0;
    DECLARE not_found CONDITION FOR SQLSTATE '02000';
    DECLARE CONTINUE HANDLER FOR not_found SET at_end = 1;
 
    REPEAT         
      delete from (select * from TABSCHEMA.TABNAME_TB a where a.COL1 = 'Y' and a.DATE = '2012-09-04' fetch first 1000000 rows only );
    UNTIL ( at_end = 1)
    END REPEAT;
  END

The goal is to avoid filling the transaction log. If the delete predicate is not indexed, each iteration re-scans the table, so the procedure may scan it many times over; consider creating an index on the predicate columns before running the delete, as sketched below.
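A minimal sketch (all names are placeholders, matching the example procedure above):

CREATE INDEX TABSCHEMA.IX_DEL_HELPER ON TABSCHEMA.TABNAME_TB (COL1, DATE);
-- run the batched delete, then drop the index if it is not otherwise useful:
-- DROP INDEX TABSCHEMA.IX_DEL_HELPER;
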
How can I delete x number of rows in a loop for any table?
CREATE PROCEDURE DELETE_WITH_COND(IN tbname varchar(100), IN vpred varchar(200))
    MODIFIES SQL DATA
    NOT DETERMINISTIC
    NULL CALL
    LANGUAGE SQL
BEGIN
DECLARE not_found CONDITION FOR SQLSTATE '02000';
DECLARE at_end INTEGER DEFAULT 0;
DECLARE txt VARCHAR(2000);
DECLARE stmt STATEMENT;
DECLARE CONTINUE HANDLER FOR not_found SET at_end = 1;

SET txt = 'delete from (select * from ' || tbname || ' where ' || vpred || ' fetch first 1000000 rows only)';
PREPARE stmt FROM txt;

    REPEAT
        EXECUTE stmt;
    UNTIL ( at_end = 1)
    END REPEAT;
END
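A hypothetical invocation, matching the hard-coded example above (note the doubled single quotes inside the predicate string):

CALL DELETE_WITH_COND('TABSCHEMA.TABNAME_TB', 'COL1 = ''Y'' AND DATE = ''2012-09-04''');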

Saving CPU in your multiple counts SQL statements


When you need multiple counts, you can write multiple SQL statements (or a program), such as:

SELECT COUNT(*) AS UNDER_40K FROM DSN8710.EMP WHERE SALARY < 40000

and

SELECT COUNT(*) AS ABOVE_40K FROM DSN8710.EMP WHERE SALARY >= 40000

Or you can simulate these multiple counts in one SQL statement, using a combination of SUM and CASE, in a single pass of the data. Note that SUM is used here due to prior limitations of COUNT.

SELECT SUM(CASE WHEN SALARY < 40000 THEN 1 ELSE 0 END) AS UNDER_40K,
       SUM(CASE WHEN SALARY >= 40000 THEN 1 ELSE 0 END) AS ABOVE_40K
FROM DSN8710.EMP

The theory is that by tagging a row with a 1 when the WHEN condition is true, the SUM adds up all of the 1s, resulting in a final count of the rows matching the WHEN condition. When the WHEN clause fails, the row is tagged with a zero or a NULL (the example above defaults to zero); neither affects the final count.

ELSE 0 vs ELSE NULL





To save CPU when running these queries you may want to consider changing the ELSE 0 to ELSE NULL (or leave out the ELSE, which defaults to NULL).

SELECT SUM(CASE WHEN SALARY < 40000 THEN 1 ELSE NULL END) AS UNDER_40K,
       SUM(CASE WHEN SALARY >= 40000 THEN 1 ELSE NULL END) AS ABOVE_40K
FROM DSN8710.EMP

Since column functions ignore NULLs, specifying ELSE NULL for the false conditions means DB2 does not attempt to SUM those rows, whereas with ELSE 0 DB2 must add the zero to the running SUM.

Specifying ELSE NULL can save significant CPU when many rows fall into the ELSE branch.

Using COUNT instead of SUM

To take the CPU improvement a step further, DB2 V7 for z/OS expands the COUNT function to support expressions; this functionality already exists in DB2 UDB on other platforms.

The COUNT function can therefore be applied to the multiple counts problem, and is more logical than using SUM to simulate counts.

SELECT COUNT(CASE WHEN SALARY < 40000 THEN '' ELSE NULL END) AS UNDER_40K,
       COUNT(CASE WHEN SALARY >= 40000 THEN '' ELSE NULL END) AS ABOVE_40K
FROM DSN8710.EMP

When using the COUNT function, the ELSE condition must assign NULL, since COUNT counts occurrences of non-NULL values (whereas SUM summed up the 1s). It therefore does not matter what the THEN condition assigns inside the COUNT; the example above uses the empty string '' because it requires less internal storage than the integer value 1.

In a sample test, this COUNT syntax outperformed the SUM (with ELSE NULL) version by approximately 4%.




Monday, April 22, 2013

How to create an incremental replicated MQT

There are two ways: if you populate your table with inserts, follow Step 1; if you populate it with LOAD, follow Step 2.

STEP 1:

-- SCHEMA.SOURCE_TAB is a placeholder for the existing base table being copied
CREATE TABLE SCHEMA.SAMPLE LIKE SCHEMA.SOURCE_TAB ;

CREATE UNIQUE INDEX "SCHEMA"."XUIt_30" ON "SCHEMA"."SAMPLE"
                ("RUN_ID" ASC,
                 "TABLE_ID" ASC,
                 "BUSINESS_DATE" ASC,
                 "ROW_ID" ASC)
                CLUSTER ALLOW REVERSE SCANS;
-- DDL Statements for primary key on Table "SCHEMA"."SAMPLE"

ALTER TABLE "SCHEMA"."SAMPLE"
        ADD CONSTRAINT "XUIHUBt_30" PRIMARY KEY
                ("RUN_ID",
                 "TABLE_ID",
                 "BUSINESS_DATE",
                 "ROW_ID");
DELETE FROM SCHEMA.SAMPLE;

INSERT INTO SCHEMA.SAMPLE SELECT * FROM SCHEMA.SOURCE_TAB FETCH FIRST 1000 ROWS ONLY;

CREATE SUMMARY TABLE SCHEMA.R_SAMPLE AS (SELECT *
FROM SCHEMA.SAMPLE ) DATA INITIALLY DEFERRED REFRESH DEFERRED;

REFRESH TABLE SCHEMA.R_SAMPLE ;

CREATE TABLE  SCHEMA.STG_SAMPLE FOR  SCHEMA.R_SAMPLE PROPAGATE IMMEDIATE;
SET INTEGRITY FOR SCHEMA.STG_SAMPLE IMMEDIATE CHECKED;

REFRESH TABLE SCHEMA.R_SAMPLE NOT INCREMENTAL;

SELECT COUNT(1) FROM SCHEMA.R_SAMPLE;

INSERT INTO SCHEMA.SAMPLE SELECT a.* FROM SCHEMA.SOURCE_TAB a WHERE (RUN_ID,BUSINESS_DATE,ROW_ID)
NOT IN (SELECT RUN_ID,BUSINESS_DATE,ROW_ID FROM SCHEMA.SAMPLE b);

SELECT COUNT(1) FROM SCHEMA.STG_SAMPLE;
SELECT COUNT(1) FROM SCHEMA.R_SAMPLE;

REFRESH TABLE SCHEMA.R_SAMPLE INCREMENTAL;

SELECT COUNT(1) FROM SCHEMA.R_SAMPLE;
SELECT COUNT(1) FROM SCHEMA.STG_SAMPLE;





STEP 2:

db2 "export to SCHEMA.SAMPLE of del select * FROM SCHEMA.SAMPLE"

CREATE TABLE SCHEMA.SAMPLE LIKE SCHEMA.HUB_TAPS_CASH ;

CREATE UNIQUE INDEX "SCHEMA"."XUIHUBt_30" ON "SCHEMA"."SAMPLE"
                ("RUN_ID" ASC,
                 "TABLE_ID" ASC,
                 "BUSINESS_DATE" ASC,
                 "ROW_ID" ASC)
                CLUSTER ALLOW REVERSE SCANS;
-- DDL Statements for primary key on Table "SCHEMA"."SAMPLE"

ALTER TABLE "SCHEMA"."SAMPLE"
        ADD CONSTRAINT "XUIHUBt_30" PRIMARY KEY
                ("RUN_ID",
                 "TABLE_ID",
                 "BUSINESS_DATE",
                 "ROW_ID");

select count(1) from SCHEMA.SAMPLE;

db2 " load client from /user/SCHEMA.SAMPLE of del  rowcount 1000 INSERT INTO SCHEMA.SAMPLE "

CREATE SUMMARY TABLE SCHEMA.R_SAMPLE AS (SELECT *
FROM SCHEMA.SAMPLE ) DATA INITIALLY DEFERRED REFRESH DEFERRED;

REFRESH TABLE SCHEMA.R_SAMPLE ;

CREATE TABLE  SCHEMA.STG_SAMPLE FOR  SCHEMA.R_SAMPLE PROPAGATE IMMEDIATE;
SET INTEGRITY FOR SCHEMA.STG_SAMPLE IMMEDIATE CHECKED;

REFRESH TABLE SCHEMA.R_SAMPLE NOT INCREMENTAL;

SELECT COUNT(1) FROM SCHEMA.R_SAMPLE;

db2 " load client from /user/SCHEMA.HUB_TAPS_CASH of del   INSERT INTO SCHEMA.SAMPLE "
SET INTEGRITY FOR SCHEMA.SAMPLE IMMEDIATE CHECKED;


----
SET INTEGRITY FOR SCHEMA.STG_SAMPLE IMMEDIATE CHECKED;
REFRESH TABLE SCHEMA.R_SAMPLE INCREMENTAL;

SELECT COUNT(1) FROM SCHEMA.R_SAMPLE;
SELECT COUNT(1) FROM SCHEMA.STG_SAMPLE;