Author Archives: Asaf Tal

Weird Oracle bug – ORA-02019 when creating a MATERIALIZED VIEW

ORA-02019: connection description for remote database not found

TL;DR
When you try to create a materialized view in a different schema and the query selects from more than one remote object, you get the misleading error ORA-02019: connection description for remote database not found.
Your options:
Change the query to use only one remote object.
Connect as the other user and create the materialized view.
Ugly workaround – create a materialized view for one of the remote objects and use it in the query.

Long version:

Lately, I was trying to create a materialized view. The problem was that the query behind the view used a subquery. As a result, when I tried to run the CREATE MATERIALIZED VIEW command, I got the famous ORA-22818: subquery expressions not allowed here.

CREATE MATERIALIZED VIEW u1.JOIN_EMP_DEPT_MV AS
SELECT a.employee_id, a.department_id,
       (SELECT department_name FROM hr.departments@orcl d
        WHERE d.department_id = a.department_id) department_name
FROM hr.employees@orcl a;

According to the Oracle error text, creating a materialized view containing a subquery is not supported:

ORA-22818: subquery expressions not allowed here
22818. 00000 -  "subquery expressions not allowed here"
*Cause:    An attempt was made to use a subquery expression where these
           are not supported.
*Action:   Rewrite the statement without the subquery expression.

To rewrite the statement without the subquery expression, I followed the advice of PL/SQL guru Steven Feuerstein: create a regular view containing the complex query, then create the materialized view on top of that view.

As a side note, building a simple materialized view on top of a regular view is good practice in general, since it allows you to change the underlying query without having to drop and recreate the materialized view.

The first part of the process went well: the view was created and worked as expected.

CREATE VIEW u1.JOIN_EMP_DEPT_V AS
SELECT a.employee_id, a.department_id,
       (SELECT department_name FROM hr.departments@orcl d
        WHERE d.department_id = a.department_id) department_name
FROM hr.employees@orcl a;

However, when I tried to create the materialized view in the application schema while connected as my private user, I got the following message:

conn U2/U2

CREATE MATERIALIZED VIEW u1.JOIN_EMP_DEPT_MV AS
SELECT * FROM u1.JOIN_EMP_DEPT_V;

ORA-02019: connection description for remote database not found

This message is completely misleading. We know the DB link exists and we know it works; after all, the view was created using this very DB link. To prove it, I recreated the view with a single remote object:

CREATE OR REPLACE VIEW u1.JOIN_EMP_DEPT_V AS
SELECT a.employee_id, a.department_id FROM hr.employees@orcl a;

One remote object – Works!

In addition, user U1 can create the materialized view without a problem:

conn U1/U1
CREATE MATERIALIZED VIEW u1.JOIN_EMP_DEPT_MV AS
SELECT * FROM JOIN_EMP_DEPT_V;

Also works!

Apparently, I am not the only one encountering the ORA-02019: connection description for remote database not found issue.

Creating materialized view of another user via private db link succeeded by selecting single table while failed by selecting more than one table (Doc ID 2305974.1)

Sadly, according to Oracle, this is not a bug. Basically, they claim that this is the way it works, and you can file an enhancement request.

If creating the view with only one remote object or connecting as the other user is not an option, one ugly workaround is to create a materialized view for one of the remote tables and join it with the remaining remote object, so the query references only one remote object:

CREATE MATERIALIZED VIEW u1.JUST_DEPT_MV AS
SELECT * FROM hr.departments@orcl;

CREATE OR REPLACE VIEW u1.JOIN_EMP_DEPT_V AS
SELECT a.employee_id, a.department_id,
       (SELECT department_name FROM u1.JUST_DEPT_MV d
        WHERE d.department_id = a.department_id) department_name
FROM hr.employees@orcl a;

A short reminder – ORA-19279: XPTY0004 – XQuery dynamic type mismatch: expected singleton sequence – got multi-item sequence

ORA-19279: XPTY0004 - XQuery dynamic type mismatch: expected singleton sequence - got multi-item sequence
19279. 00000 -  "XQuery dynamic type mismatch: expected singleton sequence - got multi-item sequence"

This message may look intimidating, but this is one of the rare cases where the Oracle error text actually explains the problem well:

*Cause: The XQuery sequence passed in had more than one item.
*Action: Correct the XQuery expression to return a single item sequence.

Basically, it only means that there is more than one instance of the tag you are looking for at the level you are looking at. Or, in plain English: duplicate nodes.

For example:

SELECT items
FROM xmltable(
  '*'
  PASSING xmltype('<ROWSET>
                     <ITEMS>
                       <ITEM>AAA</ITEM>
                     </ITEMS>
                     <ITEMS>
                       <ITEM>BBB</ITEM>
                     </ITEMS>
                   </ROWSET>')
  COLUMNS items VARCHAR2(10) PATH '//*:ITEM'
);

ORA-19279: XPTY0004 – XQuery dynamic type mismatch: expected singleton sequence – got multi-item sequence

One solution is to point the XQuery_string to the correct level. In our example, all you need to do is start the query at the ITEMS level.

SELECT items
FROM xmltable(
  '//ITEMS'
  PASSING xmltype('<ROWSET>
                     <ITEMS>
                       <ITEM>AAA</ITEM>
                     </ITEMS>
                     <ITEMS>
                       <ITEM>BBB</ITEM>
                     </ITEMS>
                   </ROWSET>')
  COLUMNS items VARCHAR2(10) PATH '//*:ITEM'
);

ITEMS
----------
AAA
BBB

Informatica – FATAL ERROR : Signal Received: SIGSEGV (11)

While running a relatively simple Informatica mapping, I got the following error.
Severity: FATAL
Message Code:
Message: *********** FATAL ERROR : Caught a fatal signal or exception. ***********
Message Code:
Message: *********** FATAL ERROR : Aborting the DTM process due to fatal signal or exception. ***********
Message Code:
Message: *********** FATAL ERROR : Signal Received: SIGSEGV (11) ***********

Basically, errors like this mean that something went terribly wrong and the Informatica server did not know how to handle it. In many cases you will need to contact Informatica support for a solution.

Various reasons can cause this kind of error:

The tnsnames.ora file used by the Oracle client installed on the PowerCenter server machine is corrupted.
If this is the case, other workflows using the same TNS entry would probably break as well. The solution is to recreate the tnsnames.ora file (or contact Oracle support).

Another cause of the SIGSEGV fatal crash is a problem with the DB drivers on the Informatica server. Many users report this issue with Teradata and Oracle.

The above causes would probably be system-wide and would break many workflows. In my case, only the process I was working on had this problem, while all other workflows ran without a problem, so I could assume the problem was limited to something specific in my mapping. I looked at the log and found that the issue happens immediately after the following message:

Message Code: PMJVM_42009
Message: [INFO] Created Java VM successfully.

This led me to the conclusion that the problem is Java-related.

It is possible that the Oracle libraries/symbols are loaded first and there is a symbol conflict between the Java libraries and the Oracle client libraries.
In this case, Informatica suggests setting the following environment variables:
LD_PRELOAD = $INFA_HOME/java/jre/lib/amd64/libjava.so
LD_LIBRARY_PATH = $INFA_HOME/java/jre/lib/amd64:$LD_LIBRARY_PATH

The problem with setting environment variables is that it is a system-wide change, and there is always a risk of breaking something else. Changing a setting at the workflow or session level is always preferred; this way, if something breaks, the problem is contained.

While looking for a workflow-level solution, I came across the following article, suggesting that the problem is the result of a mismatch between the Linux Java version and the PowerCenter version.

“This is an issue with the Oracle JDK 1.6 in combination with RHEL Linux. Java txn, and in turn the session is impacted.

The JDK has a Just In Time (JIT) compiler component that works towards optimizing Java bytecode to underlying native assembly code. Assembly code is native and is faster than Java byte code.

The crash occurs in this JIT generated assembly code, which means that native debuggers will not be able to resolve the symbols.”

There are two solutions to the problem. One is forcing the session (DTM) to spawn the Java transformation with JDK 1.7 rather than JDK 1.6. There are details in the above link but, again, this is a major change with many possible implications.

The second (and simpler) option is disabling the JDK's Just-In-Time compiler.

You can do this at the session level by adding the following JVM option as a custom property to the Integration Service (Edit Task – Config Object – Custom Properties):
JVMOption1 = -Xint
It is important to understand that this means no optimization is performed on the Java transformation's bytecode, so a performance hit is expected in the Java transformation alone. Therefore, if the Java transformation is performance-sensitive, you might want to consider the first solution.

Informatica – [PCSF_46002] Failed to communicate with the server

After serving me well for several years, my Informatica client suddenly stopped working.
I got the following error when trying to connect to one of my domains.

Error: [PCSF_46002] Failed to communicate with the server at [http://[server]:[port]/coreservices/DomainService].

Since it took me a good few hours to find the solution, I will describe all my failed attempts as well as the actual solution in my case. I hope it will save someone (probably me) some time in the future. (TL;DR – delete the proxy environment variables.)

Verify that the server is working – In my case, the domain I was trying to connect to was production, so the first thing I did was verify that the server was up and that the Integration Service was running. Very quickly I found that all the target tables were still being updated as expected. Knowing that the server was running allowed me to proceed with less stress.

Verify that the Informatica Administration console is available.
Since I was able to connect to the Informatica Administration web console at http://server:port/administrator/#admin, I concluded that the problem was on the client side. Luckily, I was able to confirm this by connecting to the domain from one of my colleagues' workstations. Only then did I realize that I was unable to connect to any domain, so the problem was on my machine without a doubt.

On the client, I followed the Informatica KB suggestion at https://kb.informatica.com/solution/23/Pages/61/509256.aspx, clicked on "Configure Domains", and tried to delete the domain entries and recreate them. This time I got an almost identical message:

Unable to save information for domain .
Error: [PCSF_46002] Failed to communicate with the server at [http://[server]:[port]/coreservices/DomainService].

Since the message started with "Unable to save", the next suspect was a problem in the OS. I verified that I have permission to write to the INFA_HOME location (\clients\PowerCenterClient\domains.infa in my case). After some more googling, I also tried to manually edit the domains.infa file and created an INFA_DOMAINS_FILE environment variable pointing to it. Still no success.

The next step was to check network connectivity. I verified that I was able to ping and telnet to the server. Since that was successful, my only remaining option was to verify that no firewall or proxy was blocking me. While searching the knowledge base for how to check the proxy settings, I came across this article:
https://kb.informatica.com/solution/18/Pages/121821.aspx
from which I learned that setting the following environment variables can interfere with the efforts to connect to the domain:
• PROXY
• http_proxy
• https_proxy
Only then did I remember that, as part of the Docker installation on my PC, I had set the HTTP_PROXY environment variable. I deleted it (on Windows: My Computer – Properties – Advanced system settings – Environment Variables) and, after a restart, my Informatica client came back to life. I wish the error messages were a little clearer, but I hope this blog post will help.

Short and sweet way to increase Sequence value to be higher than the max value in the table

Many times during the development process there is a need to copy/export/backup table data between environments or stages. While it is easy to export the data using SQL inserts, it is also very easy to forget the sequences related to the tables.
Sometimes this results in a DUP_VAL_ON_INDEX exception (ORA-00001) when you use NEXTVAL in your insert: the table already contains high values, but the sequence is still lower than the max value in the table.
Therefore, it is important to remember to increment each sequence to a value higher than the max value in its table.
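For example, here is a sketch of the failure with hypothetical names – a table t whose primary key id is fed by sequence t_seq – after importing rows with id values far beyond where the sequence stands:

```sql
-- Hypothetical names: table t, primary key id, sequence t_seq.
-- Rows with id values up to 1000 were imported, but t_seq still stands at 10.
INSERT INTO t (id) VALUES (t_seq.NEXTVAL);
-- Fails with ORA-00001: unique constraint violated
-- (raised as DUP_VAL_ON_INDEX when caught in PL/SQL)
```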

The most common way to bring the sequence value past the max value in the table is to:

1) alter the sequence increment to the difference between the current value of the sequence and the max value in the table.

ALTER SEQUENCE sequence-name INCREMENT BY 500;

2) Issue a dummy nextval request

SELECT sequence-name.nextval FROM dual;

3) Alter the sequence increment value back to the original increment

ALTER SEQUENCE sequence-name INCREMENT BY 1;
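The three steps above can be wrapped in a single PL/SQL block that computes the gap dynamically. This is only a sketch – the names my_seq, my_table and id are placeholders, and it requires the privilege to alter the sequence:

```sql
DECLARE
  v_max NUMBER;
  v_cur NUMBER;
BEGIN
  -- Highest value already present in the table (0 if empty)
  SELECT NVL(MAX(id), 0) INTO v_max FROM my_table;
  -- Consume one value to learn where the sequence currently stands
  SELECT my_seq.NEXTVAL INTO v_cur FROM dual;
  IF v_max > v_cur THEN
    -- Step 1: set the increment to the gap
    EXECUTE IMMEDIATE 'ALTER SEQUENCE my_seq INCREMENT BY ' || (v_max - v_cur);
    -- Step 2: a dummy NEXTVAL jumps the sequence to the max value
    SELECT my_seq.NEXTVAL INTO v_cur FROM dual;
    -- Step 3: restore the original increment
    EXECUTE IMMEDIATE 'ALTER SEQUENCE my_seq INCREMENT BY 1';
  END IF;
END;
/
```

After the block runs, the next NEXTVAL call returns a value above the max in the table.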

Another option, of course, is to drop the sequence and recreate it starting with the required value, but this may invalidate objects referencing it, and it will force you to re-apply all grants and permissions.
Both methods above work, but not everyone has permission to alter the sequence, and DDL operations are always risky. So, if you are looking for a short and sweet (or quick and dirty) way to increase the sequence value, look no further:

SELECT level, sequence-name.NEXTVAL
FROM dual
CONNECT BY level <= (SELECT MAX(column-using-the-sequence) FROM table-name);

This calls NEXTVAL once per generated row, i.e. MAX(column) times, so the sequence advances by at least the max value in the table and is guaranteed to end up higher than it – at the cost of overshooting and burning some sequence numbers.