2021/10/23

APEX at Always Free Oracle Cloud Tier with Social Sign-on

There are plenty of articles on the Internet describing how to make APEX social. I tried this one with my APEX at an Always Free ATP database. Of course, I used my own domain name to access my APEX application. Generally it worked, but it kept redirecting away from my pretty domain URL to those cryptic Oracle Cloud URLs.

I decided to find out how to have my domain name in my APEX app and Social Sign-On simultaneously on the Oracle Free Tier. I have to say that the Oracle Cloud Vanity URL feature could perhaps work here, but at the moment it is technically available for the paid tier only, because private database endpoints are available only for paid ADBs.

So, the configuration I am going to describe is the following (all at Free Tier):

  - Autonomous Database 19c/21c
  - APEX 21.1.3 (current for Shared Autonomous DBs)
  - VM with Linux 7.9 Micro
  - Customer-managed ORDS 21.3
  - Load Balancer

I assume a registered domain name and a Google account are already available.

My instructions here are based on two articles: Running ORDS Against the Autonomous Database and Running ORDS Against the Autonomous Database - Part 2. They almost worked for me; "almost" because a few details have to be changed, since many versions of the involved components have been released since their publication.

Autonomous Database Setup

Assuming the database is already up and running, the only thing to do is:

-- 1. Create the database user.
create user ords_public_user2 identified by "your_secure_password";
-- 2. Allow the user to connect to the database.
grant connect to ords_public_user2;
-- 3. Provision the ORDS runtime privileges for the user.
begin
    ords_admin.provision_runtime_role(
        p_user => 'ORDS_PUBLIC_USER2'
        , p_proxy_enabled_schemas => true
    );
end;
/

VM Setup

Some details about the Linux VM creation: I used the VM.Standard.E2.1.Micro shape with the Oracle-Linux-7.9-2021.10.04-0 image. The VM has to reside in the subnet where the Load Balancer will be deployed.

Here are the rules in the Default Security List of my subnet I found crucial for all of this to work properly (they mirror the firewall and Load Balancer settings below):

  - stateful ingress, TCP port 22, source 0.0.0.0/0 (SSH to the VM)
  - stateful ingress, TCP port 443, source 0.0.0.0/0 (the HTTPS listener of the Load Balancer)
  - stateful ingress, TCP port 8080, source 0.0.0.0/0 (the HTTP listener and the Load Balancer-to-ORDS backend traffic)

ORDS Setup

The next steps are slightly modified steps from Running ORDS Against the Autonomous Database.

1. Copy the APEX distribution and your Autonomous DB wallet to the VM

scp -i <YOUR_PRIVATE_KEY> /u01/Downloads/apex_21.1.zip opc@<VM_PUBLIC_IP>:/tmp
scp -i <YOUR_PRIVATE_KEY> /u01/Downloads/Wallet_<YOUR_DB>.zip opc@<VM_PUBLIC_IP>:/tmp

2. SSH to the VM and execute the following commands:

- install ORDS

sudo yum-config-manager --enable ol7_oci_included
sudo yum install -y ords

- configure firewall

sudo firewall-cmd --zone=public --add-port 8080/tcp
sudo firewall-cmd --zone=public --add-port 8080/tcp --permanent
sudo firewall-cmd --zone=public --add-port 443/tcp
sudo firewall-cmd --zone=public --add-port 443/tcp --permanent

- set the configuration folder for ORDS. The following commands are executed as the oracle user.

sudo su - oracle
export JAVA_OPTIONS=-Xmx512M
ORDS_CONFIG_DIR=/opt/oracle/ords/config && mkdir -p $ORDS_CONFIG_DIR/ords/conf
ords configdir $ORDS_CONFIG_DIR

- create ORDS configuration files

ORDS_USER=ORDS_PUBLIC_USER2
ORDS_PASSWORD=your_secure_password
SERVICE_NAME=dbopas4_tp
WALLET_BASE64=`base64 -w 0 /tmp/Wallet_DBOPAS4.zip`

cat << EOF > $ORDS_CONFIG_DIR/ords/conf/apex_pu.xml
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
  <entry key="db.username">$ORDS_USER</entry>
  <entry key="db.password">!$ORDS_PASSWORD</entry>
  <entry key="db.wallet.zip.service">$SERVICE_NAME</entry>
  <entry key="db.wallet.zip"><![CDATA[$WALLET_BASE64]]></entry>
</properties>
EOF
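
A quick sanity check of the `base64 -w 0` step above: the `-w 0` flag disables line wrapping, which is what lets the encoded wallet fit into the single-line db.wallet.zip XML entry. The file here is just a stand-in for the real wallet zip:

```shell
# Round-trip check: encode without wrapping, decode, compare to the original.
printf 'dummy-wallet-bytes' > /tmp/wallet_check.bin    # stand-in for the wallet zip
ENC=$(base64 -w 0 /tmp/wallet_check.bin)               # -w 0: single line, no wrapping
printf '%s' "$ENC" | base64 -d > /tmp/wallet_check.out
cmp -s /tmp/wallet_check.bin /tmp/wallet_check.out && echo "wallet round-trip OK"
```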

cat << EOF > $ORDS_CONFIG_DIR/ords/defaults.xml
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
  <entry key="plsql.gateway.enabled">true</entry>
  <entry key="jdbc.InitialLimit">5</entry>
  <entry key="jdbc.MaxLimit">10</entry>
</properties>
EOF

- deploy and configure APEX static files. An open question remains here: since Cloud APEX runs a patched version (higher than the one available for download), it has newer static files than those in the APEX installation archive (more details at the APEX Download page). Fresher files are available at the CDN. So far I do not know how to update the static files in my configuration or how to make use of the CDN here.

APEX_FILE=/tmp/apex_21.1.zip
RELEASE=21.1.3
mkdir -p /opt/oracle/apex/images/$RELEASE
unzip $APEX_FILE -d /tmp/
cp -R /tmp/apex/images/* /opt/oracle/apex/images/$RELEASE/
rm -rf /tmp/apex
mkdir -p /opt/oracle/ords/config/ords/standalone/
cat << EOF > /opt/oracle/ords/config/ords/standalone/standalone.properties
standalone.static.context.path=/i
standalone.static.path=/opt/oracle/apex/images
EOF
mkdir -p /opt/oracle/ords/config/ords/standalone/doc_root
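
The directory layout matters later: the Load Balancer health check will request /i/21.1.3/apex_version.txt, and standalone.static.context.path=/i maps straight onto standalone.static.path. A small sketch of that mapping, using a /tmp directory as a stand-in for /opt/oracle/apex/images (the file content below is illustrative, not the exact text APEX ships):

```shell
# Simulate the static-files layout to see how the health-check URL maps to disk.
STATIC_PATH=/tmp/apex_images_demo       # stand-in for /opt/oracle/apex/images
RELEASE=21.1.3
mkdir -p "$STATIC_PATH/$RELEASE"
echo "Application Express Version: $RELEASE" > "$STATIC_PATH/$RELEASE/apex_version.txt"
# URL /i/21.1.3/apex_version.txt is served from $STATIC_PATH/21.1.3/apex_version.txt
cat "$STATIC_PATH/$RELEASE/apex_version.txt"
```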

- edit ords.conf to add Java heap size configuration

vi /etc/ords/ords.conf

add the following line

JAVA_OPTIONS=-Xmx512M

- start ORDS for the first time

ords standalone

when prompted "Enter 1 if using HTTP or 2 if using HTTPS [1]:" choose 1
when prompted "Enter the HTTP port [8080]:" accept 8080
After ORDS has started, press Ctrl-C to exit.

- get back to the opc user and enable ORDS autostart

sudo systemctl enable ords

- reboot the VM

- once the VM is available again, check the ORDS status and log

sudo systemctl status ords
tail -f /var/log/ords/ords.log

Load Balancer Setup

Basically, the steps from Running ORDS Against the Autonomous Database - Part 2 can be followed, although the Cloud UI is quite different these days.

- when prompted "Select Load Balancer Type" choose "Load balancer".

- "Add details" screen - all defaults, choose VCN and Subnet where your VM resides.

- "Choose backends" - all defaults.

- "Configure Listeners" - all defaults. Add the SSL certificate created for your application domain name.

- "Manage Logging" - up to you.

After the Load Balancer is created, go to the "Resources" section at the bottom left of its details page.

- go to "Backend Sets"; at the right side of the created Backend Set's row press "...", choose "Update Health Check", and set

Port: 8080
URL Path (URI): /i/21.1.3/apex_version.txt

- click the name of the Backend Set, then click "Backends" in the Resources section.

- create a backend: specify your VM private IP and port 8080

- go back to Load Balancer Details, go to Hostnames, and add your application domain name there

- go back to Load Balancer Details, go to Listeners, and add one more listener for HTTP protocol and port 8080. Make sure both listeners are bound to the Backend Set and the Hostname you specified earlier.

- go back to Load Balancer Details, go to Path Routes, and add two routes like the following

If URL matches (Force Longest Prefix Match) /ords, direct traffic to backend set <YOUR_BACKEND_SET>
If URL matches (Force Longest Prefix Match) /i, direct traffic to backend set <YOUR_BACKEND_SET>
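
Those two path routes capture everything under /ords (the APEX engine) and /i (static files); anything else gets no route. A toy matcher, just to illustrate which request paths the routes catch (the URLs are made up):

```shell
# Toy illustration of the two configured path routes.
route() {
  case "$1" in
    /ords*) echo "/ords -> backend set" ;;
    /i*)    echo "/i -> backend set" ;;
    *)      echo "no route" ;;
  esac
}
route "/ords/f?p=100:1"                 # APEX engine traffic
route "/i/21.1.3/apex_version.txt"      # static files / health check
route "/somewhere/else"                 # not covered by either route
```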

At this point, the Load Balancer status should be green "OK".

Take the Load Balancer's public IP and bind it to your domain name. Wait until the domain name resolves to the Load Balancer's public IP.
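
One way to watch for the DNS switch-over from the shell; the domain and IP below are hypothetical placeholders:

```shell
# Check whether the domain already resolves to the Load Balancer's public IP.
LB_IP="203.0.113.10"            # hypothetical Load Balancer public IP
DOMAIN="apex.example.com"       # hypothetical application domain
RESOLVED=$(getent hosts "$DOMAIN" | awk '{print $1}')
if [ "$RESOLVED" = "$LB_IP" ]; then
  echo "DNS ready: $DOMAIN -> $RESOLVED"
else
  echo "still propagating (currently: ${RESOLVED:-no record})"
fi
```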

APEX application with Social Sign-on

In the meantime, create and configure an APEX application using this article.

Use "https://<YOUR_DOMAIN_NAME>/ords/apex_authentication.callback" for "Authorized redirect URIs" in Google "Credential" configuration.

Once everything is ready, open your APEX application using your domain name: it should direct you to Google authentication and eventually back to your app, still under your domain name.

Update:

It takes just this to set up the proper static files:

begin
    apex_instance_admin.set_parameter(
        p_parameter => 'IMAGE_PREFIX',
        p_value     => 'https://static.oracle.com/cdn/apex/21.1.2/' );
    commit;
end;
/

2021/03/24

My first ML model

 


2017/12/21

An ORA-07445 troubleshooting case

Whoever adopts fresh Oracle releases will certainly hit some fresh bug that no one has registered or described before.
And then there is no choice but to dig deeply into the available trace information to get a clue of how to deal with the situation. I mean, a dev team is not going to wait until the SR is resolved at some indefinite point in the future.

So, it was a huge query, more than five page-downs long, and it failed to work. An MOS search for the first argument of the ORA-07445 gave nothing, and so did Google.
But still, it was not the time to give up: the Oracle kernel error stack can actually provide a lot of interesting stuff.

Here is an excerpt from ORA-07445 incident trace file:

----- Call Stack Trace -----
calling location                 call type   entry point
-------------------------------- ----------- --------------------------------
skdstdst()+45                    call        kgdsdst()
...
__sighandler()                   call        sslsshandler()
kkoordi()+670                    signal      __sighandler()
kkocnp()+279                     call        kkoordi()
kkooqb()+1882                    call        kkocnp()
kkoqbc()+2438                    call        kkooqb()
apakkoqb()+182                   call        kkoqbc()
apaqbdDescendents()+488          call        apakkoqb()
apaqbd()+135                     call        apaqbdDescendents()
kkqcbyGetCost()+749              call        apaqbd()
kkqcbydrvPreUA()+2989            call        kkqcbyGetCost()
qksqbApplyToQbcLoc()+635         call        kkqcbydrvPreUA()
qksqbApplyToQbcLoc()+987         call        qksqbApplyToQbcLoc()
...

The function that failed is kkoordi(). Going down the stack, some human-readable names can be found, and they are what points the way. Just two names gave a couple of ideas of what to do next.
kkqcbyGetCost() is certainly about the Cost-Based Optimizer. That means the error must be reproducible with an EXPLAIN PLAN statement, which was easily confirmed.
qksqbApplyToQbcLoc() seems to apply something to a query block, the very block that caused the error! So, turning on the 10053 trace event and executing EXPLAIN PLAN one more time provided the second major piece of information: the tail of the 10053 trace file showed the query block (subquery) that caused the error while costing its different join orders.

So simple in this case! And it gave the developers all the information they needed to rewrite just one subquery and eliminate the issue.


2017/09/18

APPEND hint placement

It is pretty surprising that the APPEND hint can be placed not only after the INSERT keyword but also after the SELECT keyword of a subquery.

Check these execution plans:

SQL> create table tab1 as select * from dba_objects;

Table TAB1 created.

SQL> create table tab2 as select * from tab1 where 1=2;

Table TAB2 created.

SQL> explain plan for
  2* insert /*+ append */ into tab2 select * from tab1;
Explained.

SQL> @geteplan

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------
Plan hash value: 888952673

----------------------------------------------------------------------
| Id  | Operation                        | Name | Rows  | Cost (%CPU)|
----------------------------------------------------------------------
|   0 | INSERT STATEMENT                 |      | 98784 |   350   (1)|
|   1 |  LOAD AS SELECT                  | TAB2 |       |            |
|   2 |   OPTIMIZER STATISTICS GATHERING |      | 98784 |   350   (1)|
|   3 |    TABLE ACCESS FULL             | TAB1 | 98784 |   350   (1)|
----------------------------------------------------------------------

SQL> explain plan for
  2* insert into tab2 select * from tab1;
Explained.

SQL> @geteplan

PLAN_TABLE_OUTPUT
--------------------------------------------------------------
Plan hash value: 2211052296

--------------------------------------------------------------
| Id  | Operation                | Name | Rows  | Cost (%CPU)|
--------------------------------------------------------------
|   0 | INSERT STATEMENT         |      | 98784 |   350   (1)|
|   1 |  LOAD TABLE CONVENTIONAL | TAB2 |       |            |
|   2 |   TABLE ACCESS FULL      | TAB1 | 98784 |   350   (1)|
--------------------------------------------------------------

SQL> explain plan for
  2* insert into tab2 select /*+ append */ * from tab1;
Explained.

SQL> @geteplan

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------
Plan hash value: 888952673

----------------------------------------------------------------------
| Id  | Operation                        | Name | Rows  | Cost (%CPU)|
----------------------------------------------------------------------
|   0 | INSERT STATEMENT                 |      | 98784 |   350   (1)|
|   1 |  LOAD AS SELECT                  | TAB2 |       |            |
|   2 |   OPTIMIZER STATISTICS GATHERING |      | 98784 |   350   (1)|
|   3 |    TABLE ACCESS FULL             | TAB1 | 98784 |   350   (1)|
----------------------------------------------------------------------

And this fact is documented in quite an unusual place: see section 20.4.2.3.1, Serial Mode Inserts with SQL Statements, of the Admin Guide for more details.

2017/08/30

Bind variable peeking. Part 2.

As Jonathan Lewis wrote, "bind variable peeking is always done, even if histograms are not generated", so be prepared: your plan can change without any warning.

Let me show the following example:


create table tab1 as select * from all_objects;
create unique index tab1_obj_id on tab1(object_id);
create index tab1_data_obj_id on tab1(DATA_OBJECT_ID,object_name,object_id);
exec dbms_stats.gather_table_stats(user, 'TAB1', cascade=> true, method_opt => 'FOR ALL COLUMNS SIZE 1');


It is obvious that with a filter object_id=:bind, the unique index tab1_obj_id should be chosen by the CBO.


explain plan for
select count(1) from tab1 where object_id=:a1 and DATA_OBJECT_ID=:a2;

Plan hash value: 17355806

--------------------------------------------------------------------------------------------
| Id  | Operation                    | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |             |     1 |     7 |     2   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE              |             |     1 |     7 |            |          |
|*  2 |   TABLE ACCESS BY INDEX ROWID| TAB1        |     1 |     7 |     2   (0)| 00:00:01 |
|*  3 |    INDEX UNIQUE SCAN         | TAB1_OBJ_ID |     1 |       |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):

---------------------------------------------------

   2 - filter("DATA_OBJECT_ID"=TO_NUMBER(:A2))

   3 - access("OBJECT_ID"=TO_NUMBER(:A1))

But bind peeking can lead to unexpected results. An unlucky bind value, NULL in this case, creates lots of confusion and, sometimes, a very inefficient plan.

variable a1 number
variable a2 number

begin
  :a1:=29;
  :a2:=null;
end;
/

select /*2*/ count(1) from tab1 where object_id=:a1 and DATA_OBJECT_ID=:a2;
select * from table(dbms_xplan.display_cursor(null, null, 'LAST ALLSTATS +peeked_binds'));

SQL_ID  25bxjz9jgv53y, child number 0
-------------------------------------
select /*2*/ count(1) from tab1 where object_id=:a1 and
DATA_OBJECT_ID=:a2

Plan hash value: 683818241

--------------------------------------------------------------------------------------
| Id  | Operation         | Name             | Starts | E-Rows | A-Rows |   A-Time   |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |                  |      1 |        |      1 |00:00:00.01 |
|   1 |  SORT AGGREGATE   |                  |      1 |      1 |      1 |00:00:00.01 |
|*  2 |   INDEX RANGE SCAN| TAB1_DATA_OBJ_ID |      1 |      1 |      0 |00:00:00.01 |
--------------------------------------------------------------------------------------

Peeked Binds (identified by position):
--------------------------------------

   1 - (NUMBER): 29
   2 - (NUMBER): (null)

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("DATA_OBJECT_ID"=:A2 AND "OBJECT_ID"=:A1)
       filter("OBJECT_ID"=:A1)

One more execution, now with all binds non-null, just reuses the already parsed plan. With real-world data it might get stuck forever.

begin
  :a1:=29;
  :a2:=29;
end;
/

select /*2*/ count(1) from tab1 where object_id=:a1 and DATA_OBJECT_ID=:a2;
select * from table(dbms_xplan.display_cursor(null, null, 'LAST ALLSTATS +peeked_binds'));

SQL_ID  25bxjz9jgv53y, child number 0
-------------------------------------
select /*2*/ count(1) from tab1 where object_id=:a1 and
DATA_OBJECT_ID=:a2

Plan hash value: 683818241

----------------------------------------------------------------------------------------------
| Id | Operation         | Name             | Starts | E-Rows | A-Rows |   A-Time  | Buffers |
----------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT  |                  |      1 |        |      1 |00:00:00.01|       3 |
|  1 |  SORT AGGREGATE   |                  |      1 |      1 |      1 |00:00:00.01|       3 |
|* 2 |   INDEX RANGE SCAN| TAB1_DATA_OBJ_ID |      1 |      1 |      1 |00:00:00.01|       3 |
----------------------------------------------------------------------------------------------

Peeked Binds (identified by position):
--------------------------------------

   1 - (NUMBER): 29
   2 - (NUMBER): (null)

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("DATA_OBJECT_ID"=:A2 AND "OBJECT_ID"=:A1)
       filter("OBJECT_ID"=:A1)

A little variation of this case: OBJECT_TYPE (VARCHAR2) is used instead of DATA_OBJECT_ID (NUMBER):

create table tab1 as select * from all_objects;
create unique index tab1_obj_id on tab1(object_id);
create index tab1_type_obj_id on tab1(object_type,object_name,object_id);
exec dbms_stats.gather_table_stats(user, 'TAB1', cascade=> true, method_opt => 'FOR ALL COLUMNS SIZE 1');

Unlucky bind value:
begin
  :a1:=29;
  :a2:=null;
end;
/

select /*2*/ count(1) from tab1 where object_id=:a1 and object_type=:a2;
select * from table(dbms_xplan.display_cursor(null, null, 'LAST ALLSTATS +peeked_binds'));

SQL_ID  6buxk2515azrt, child number 0
-------------------------------------
select /*2*/ count(1) from tab1 where object_id=:a1 and object_type=:a2

Plan hash value: 3906737377

--------------------------------------------------------------------------------------
| Id  | Operation         | Name             | Starts | E-Rows | A-Rows |   A-Time   |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |                  |      1 |        |      1 |00:00:00.01 |
|   1 |  SORT AGGREGATE   |                  |      1 |      1 |      1 |00:00:00.01 |
|*  2 |   INDEX RANGE SCAN| TAB1_TYPE_OBJ_ID |      1 |      1 |      0 |00:00:00.01 |
--------------------------------------------------------------------------------------

Peeked Binds (identified by position):
--------------------------------------

   1 - (NUMBER): 29
   2 - (VARCHAR2(30), CSID=873): (null)

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("OBJECT_TYPE"=:A2 AND "OBJECT_ID"=:A1)
       filter("OBJECT_ID"=:A1)

But when a lucky one arrives, the picture is much better:

begin
  :a1:=29;
  :a2:='CLUSTER';
end;
/

select /*2*/ count(1) from tab1 where object_id=:a1 and object_type=:a2;
select * from table(dbms_xplan.display_cursor(null, null, 'LAST ALLSTATS +peeked_binds'));

SQL_ID  6buxk2515azrt, child number 1
-------------------------------------
select /*2*/ count(1) from tab1 where object_id=:a1 and object_type=:a2

Plan hash value: 17355806

-----------------------------------------------------------------------------------------
| Id  | Operation                    | Name        | Starts | E-Rows | A-Rows | Buffers |
-----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |             |      1 |        |      1 |       3 |
|   1 |  SORT AGGREGATE              |             |      1 |      1 |      1 |       3 |
|*  2 |   TABLE ACCESS BY INDEX ROWID| TAB1        |      1 |      1 |      1 |       3 |
|*  3 |    INDEX UNIQUE SCAN         | TAB1_OBJ_ID |      1 |      1 |      1 |       2 |
-----------------------------------------------------------------------------------------

Peeked Binds (identified by position):
--------------------------------------

   1 - (NUMBER): 29
   2 - (VARCHAR2(30), CSID=873): 'CLUSTER'

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("OBJECT_TYPE"=:A2)
   3 - access("OBJECT_ID"=:A1)

Please notice that the second execution resulted in a new child cursor, number 1.
Look at this comment, which is about the following claim: "part of the identifying characteristic for a child cursor is the memory allocation for the input bind variables and, at some layer in the library code, character columns are allowed to have 4 different allocation sizes, namely 32, 128, 2000, or 4000 bytes". So a zero-size bind seems to add one more allocation size and marks child cursor number 0 as not suitable for execution with a non-zero-length bind.

Upd.: one more piece of information is here

2017/08/29

SQLPATH environment variable length limit

I heavily use lots of SQL scripts spread across several folders. Recently I started reorganizing those directories to meet the needs of SQLcl (I am testing it now in order to add it to my tool set). However, some of my scripts did not work properly with SQLcl, so I had to return to SQL*Plus for a while. But when I started SQL*Plus, it could find neither login.sql nor any other scripts from my big library anymore.

Almost by accident, it turned out that SQL*Plus cannot handle an SQLPATH variable longer than 260 characters. In contrast, SQLcl showed no problem with even longer values of SQLPATH.
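
A quick way to check whether a candidate SQLPATH is over that limit before handing it to SQL*Plus; the directories below are made-up examples:

```shell
# Count the characters in a candidate SQLPATH value (observed SQL*Plus limit: 260).
SQLPATH="/u01/scripts:/u01/scripts/perf:/u01/scripts/admin"   # made-up example paths
echo "SQLPATH length: ${#SQLPATH}"
[ ${#SQLPATH} -le 260 ] && echo "within the SQL*Plus limit" || echo "too long for SQL*Plus"
```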

2017/07/21

Wrong result when hierarchical query uses rownum filter and subquery filter over dblink

Here is a test case to reproduce the issue. It uses the well-known EMP and DEPT tables; if you need a script to create and populate them, see the SQL script at the end of this article.
Additionally, a loopback database link is needed:
create database link dbloop connect to USERNAME identified by PASSWORD using 'localhost:1521/SERVICENAME';

And here is a result:

SQL> select
  2    LPAD(' ',2*(LEVEL-1)) || ename ename, job
  3  from emp
  4  where
  5      deptno not in (select deptno from dept where loc='NEW YORK')
  6  and rownum=1
  7  start with mgr is null
  8  connect by prior empno=mgr;

ENAME           JOB
--------------- ---------
  JONES         MANAGER

SQL> select
  2    LPAD(' ',2*(LEVEL-1)) || ename ename, job
  3  from emp
  4  where
  5      deptno not in (select /*+ driving_site(dept) */ deptno from dept@dbloop where loc='NEW YORK')
  6  and rownum=1
  7  start with mgr is null
  8  connect by prior empno=mgr;

no rows selected

So, the only difference is that in the second query the NOT IN subquery uses the DEPT table behind a dblink, and the DRIVING_SITE hint makes Oracle execute the whole statement remotely.

The execution plan for the second query gives some clue about the root cause of the wrong result:

Plan hash value: 1623810261

--------------------------------------------------------------------
| Id  | Operation              | Name | Cost (%CPU)| Inst   |IN-OUT|
--------------------------------------------------------------------
|   0 | SELECT STATEMENT REMOTE|      |     3  (34)|        |      |
|   1 |  COUNT                 |      |            |        |      |
|*  2 |   FILTER               |      |            |        |      |
|   3 |    REMOTE              | EMP  |            |      ! | R->S |
|*  4 |    TABLE ACCESS FULL   | DEPT |     3   (0)| DB12C~ |      |
--------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter( NOT EXISTS (SELECT 0 FROM "DEPT" "A2" WHERE 
              "A2"."LOC"='NEW YORK' AND LNNVL("A2"."DEPTNO"<>:B1)))
   4 - filter("A2"."LOC"='NEW YORK' AND LNNVL("A2"."DEPTNO"<>:B1))

Remote SQL Information (identified by operation id):
----------------------------------------------------

   3 - SELECT "MGR","EMPNO","DEPTNO","JOB","ENAME",PRIOR "EMPNO",LEVEL 
       FROM "EMP" "A7" WHERE ROWNUM=1 START WITH ("MGR" IS NULL) 
  CONNECT BY ("MGR"=PRIOR "EMPNO") (accessing '!' )


Note
-----
   - fully remote statement
   - dynamic statistics used: dynamic sampling (level=2)

It shows that the ROWNUM=1 predicate is applied at step 3, before the NOT IN predicate at step 2, which is wrong.

Checked on 11.2.0.3, 12.1.0.2, 12.2.0.1.

P.S. I have not managed to find anything matching on MOS.

=============================================================

create table dept(  
  deptno     number(2,0),  
  dname      varchar2(14),  
  loc        varchar2(13),  
  constraint pk_dept primary key (deptno)  
);
create table emp(  
  empno    number(4,0),  
  ename    varchar2(10),  
  job      varchar2(9),  
  mgr      number(4,0),  
  hiredate date,  
  sal      number(7,2),  
  comm     number(7,2),  
  deptno   number(2,0),  
  constraint pk_emp primary key (empno),  
  constraint fk_deptno foreign key (deptno) references dept (deptno)  
);

insert into DEPT (DEPTNO, DNAME, LOC) values(10, 'ACCOUNTING', 'NEW YORK');
insert into dept  values(20, 'RESEARCH', 'DALLAS');
insert into dept  values(30, 'SALES', 'CHICAGO');
insert into dept  values(40, 'OPERATIONS', 'BOSTON');
insert into emp   values(7839, 'KING', 'PRESIDENT',  null,   to_date('17-11-1981','dd-mm-yyyy'),5000, null, 10  );
insert into emp   values(7698, 'BLAKE', 'MANAGER',   7839,   to_date('1-5-1981','dd-mm-yyyy'),  2850, null, 30  );
insert into emp   values(7782, 'CLARK', 'MANAGER',   7839,   to_date('9-6-1981','dd-mm-yyyy'),  2450, null, 10  );
insert into emp   values(7566, 'JONES', 'MANAGER',   7839,   to_date('2-4-1981','dd-mm-yyyy'),  2975, null, 20  );
insert into emp   values(7788, 'SCOTT', 'ANALYST',   7566,   to_date('30-MAR-87','dd-mm-rr'),   3000, null, 20  );
insert into emp   values(7902, 'FORD', 'ANALYST',    7566,   to_date('3-12-1981','dd-mm-yyyy'), 3000, null, 20  );
insert into emp   values(7369, 'SMITH', 'CLERK',     7902,   to_date('17-12-1980','dd-mm-yyyy'),800,  null, 20  );
insert into emp   values(7499, 'ALLEN', 'SALESMAN',  7698,   to_date('20-2-1981','dd-mm-yyyy'), 1600, 300, 30  );
insert into emp   values(7521, 'WARD', 'SALESMAN',   7698,   to_date('22-2-1981','dd-mm-yyyy'), 1250, 500, 30  );
insert into emp   values(7654, 'MARTIN', 'SALESMAN', 7698,   to_date('28-9-1981','dd-mm-yyyy'), 1250, 1400, 30  );
insert into emp   values(7844, 'TURNER', 'SALESMAN', 7698,   to_date('8-9-1981','dd-mm-yyyy'),  1500, 0, 30  );
insert into emp   values(7876, 'ADAMS', 'CLERK',     7788,   to_date('21-APR-87', 'dd-mm-rr'),  1100, null, 20  );
insert into emp   values(7900, 'JAMES', 'CLERK',     7698,   to_date('3-12-1981','dd-mm-yyyy'), 950,  null, 30  );
insert into emp   values(7934, 'MILLER', 'CLERK',    7782,   to_date('23-1-1982','dd-mm-yyyy'), 1300, null, 10  );
commit;

select
  LPAD(' ',2*(LEVEL-1)) || ename ename, job
from emp
where
    deptno not in (select deptno from dept where loc='NEW YORK')
and rownum=1
start with mgr is null
connect by prior empno=mgr;

select 
  LPAD(' ',2*(LEVEL-1)) || ename ename, job
from emp
where
    deptno not in (select /*+ driving_site(dept) */ deptno from dept@dbloop where loc='NEW YORK')
and rownum=1
start with mgr is null
connect by prior empno=mgr;