
Short reminder: how to handle Informatica HTTP transformation failure responses

A common way to invoke REST services is the Informatica HTTP transformation. It is an easy solution and it works well: you get the response in the HTTPOUT field, and you can write it to the database, parse it with a Data Processor mapplet, or even use simple string functions.

While this works well when the service returns a successful response, by default, on a failure response (4xx and 5xx HTTP errors, for example) the HTTPOUT port is empty and an error is raised.

So, what can you do if you want to write the failure response to the database and continue to the next row? This way, the task can complete and the failed records can be logged or retried.

The simple solution is to set the custom property IncludeErrorResponsesInHTTPTransformation=Yes.

This can be done at the session level, but also at the Integration Service level.

To set the custom property at the session level: Workflow Manager -> Edit Tasks -> Config Object -> Custom Properties -> add the attribute IncludeErrorResponsesInHTTPTransformation and write Yes in the Value field.

After that, the error response will come out of the HTTPOUT port of the HTTP transformation just like a successful response.

Custom Property IncludeErrorResponsesInHTTPTransformation
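
If the goal is to log failed records for a later retry, the error responses coming out of HTTPOUT can simply be routed to a log table. A minimal sketch, with hypothetical table and column names:

-- Hypothetical landing table for failed HTTP responses.
CREATE TABLE http_error_log (
  source_row_id  NUMBER,                -- key of the input row that failed
  http_response  CLOB,                  -- raw content of the HTTPOUT port
  load_date      DATE DEFAULT SYSDATE   -- when the failure was logged
);

A retry job can then re-read http_error_log and resend only the failed rows.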

Short reminder for Informatica ORA-24333: zero iteration count

When you run an Informatica workflow, you may get the following error message:

ORA-24333: zero iteration count

Database driver error...
Function Name : Execute
SQL Stmt : TABLE_NAME.COL1='A'
Oracle Fatal Error
Database driver error...
Function Name : Execute
SQL Stmt : TABLE_NAME.COL1='A'
Oracle Fatal Error

The Informatica error description in https://kb.informatica.com/solution/18/Pages/121171.aspx says:

“This error occurs because the SQL query sent to the database for the Source Qualifier only contains the Join statement – this is caused by putting the JOIN-statement in the SQL Query property instead of in the User Defined Join property of the Source Qualifier.”

This is a little confusing, because the message talks about joins when in many cases, like the example above, there is no join.

Still, the problem is the same, and all you need to do is move the condition from the SQL Query property to the User Defined Join property.
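
To make that concrete, here is a hedged sketch of the difference (TABLE_NAME and COL1 come from the error message above; the full SELECT override is my own illustration):

-- Cause of ORA-24333: the SQL Query property contains only a bare
-- condition, so Oracle receives an incomplete statement.
TABLE_NAME.COL1='A'

-- Fix: leave SQL Query empty and put the same condition in the
-- User Defined Join property; the Integration Service appends it
-- to the WHERE clause of the query it generates.
TABLE_NAME.COL1='A'

-- Alternatively, if you do use the SQL Query override, it must be
-- a complete SELECT statement, for example:
SELECT TABLE_NAME.COL1
FROM   TABLE_NAME
WHERE  TABLE_NAME.COL1 = 'A'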

ORA-24333

Weird Oracle bug – ORA-02019 when creating a MATERIALIZED VIEW

ORA-02019: connection description for remote database not found

TL;DR
When you try to create a materialized view on a different schema and the query selects from more than one remote object, you get a misleading error: ORA-02019: connection description for remote database not found.
Your options:
Change the query to use only one remote object.
Connect as the other user and create the materialized view.
Ugly workaround – create a materialized view for one of the remote objects and use it in the query.

Long version:

Lately, I was trying to create a materialized view. The problem was that the query behind the view used a subquery. As a result, when I tried to run the CREATE MATERIALIZED VIEW command, I got the famous ORA-22818: subquery expressions not allowed here.

CREATE MATERIALIZED VIEW u1.JOIN_EMP_DEPT_MV AS
SELECT a.employee_id, a.department_id,
       (SELECT department_name FROM hr.departments@orcl d
        WHERE d.department_id = a.department_id) department_name
FROM hr.employees@orcl a;

According to the Oracle error codes, creating a materialized view containing a subquery is not supported:

ORA-22818: subquery expressions not allowed here
22818. 00000 -  "subquery expressions not allowed here"
*Cause:    An attempt was made to use a subquery expression where these
           are not supported.
*Action:   Rewrite the statement without the subquery expression.

In order to rewrite the statement without the subquery expression, I followed the advice of PL/SQL guru Steven Feuerstein: create a regular view with the complex query, and then create a simple materialized view on top of that view.

As a side note, building a simple materialized view on top of a regular view is good practice in general, since it allows you to change the underlying query without having to drop and recreate the materialized view.
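
For instance, once both objects exist, a later change of logic only needs a CREATE OR REPLACE VIEW plus a refresh, while the materialized view stays in place. A quick sketch (the UPPER() change is an arbitrary illustration, and it assumes the view's column list is unchanged):

-- Change the logic of the underlying view in place...
CREATE OR REPLACE VIEW u1.JOIN_EMP_DEPT_V AS
SELECT a.employee_id, a.department_id,
       (SELECT UPPER(department_name) FROM hr.departments@orcl d
        WHERE d.department_id = a.department_id) department_name
FROM hr.employees@orcl a;

-- ...then do a complete refresh of the materialized view on top.
EXEC DBMS_MVIEW.REFRESH('U1.JOIN_EMP_DEPT_MV', 'C');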

The first part of the process went well: the view was created and worked as expected.

CREATE VIEW u1.JOIN_EMP_DEPT_V AS
SELECT a.employee_id, a.department_id,
       (SELECT department_name FROM hr.departments@orcl d
        WHERE d.department_id = a.department_id) department_name
FROM hr.employees@orcl a;

However, once I tried to create the materialized view in the application schema (U1) while connected as my private user (U2), I got the following message:

conn U2/U2

CREATE MATERIALIZED VIEW u1.JOIN_EMP_DEPT_MV AS
SELECT * FROM u1.JOIN_EMP_DEPT_V;

ORA-02019: connection description for remote database not found

This message is completely misleading. We know that the db link exists and we know it works. After all, the view was created using this db link.
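
For what it's worth, the data dictionary confirms the link is there; a quick sanity check against the standard ALL_DB_LINKS view (the LIKE filter is just for illustration):

-- List database links visible to the current user.
SELECT owner, db_link, host
FROM   all_db_links
WHERE  db_link LIKE 'ORCL%';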

CREATE OR REPLACE VIEW u1.JOIN_EMP_DEPT_V AS
SELECT a.employee_id, a.department_id FROM hr.employees@orcl a;

One remote object – Works!

In addition, user U1 can create the materialized view without a problem:

conn U1/U1
CREATE MATERIALIZED VIEW u1.JOIN_EMP_DEPT_MV AS
SELECT * FROM JOIN_EMP_DEPT_V;

Also works!

Apparently, I am not the only one encountering the ORA-02019: connection description for remote database not found issue.

Creating materialized view of another user via private db link succeeded by selecting single table while failed by selecting more than one table (Doc ID 2305974.1)

Sadly, according to Oracle, this is not a bug. Basically, they claim that this is the way it works, and you can file an ENHANCEMENT REQUEST.

If creating the view with only one remote object or connecting as the other user is not an option, one ugly workaround is to create a materialized view for one of the remote tables, so the query is left with only one remote object:

CREATE MATERIALIZED VIEW u1.JUST_DEPT_MV AS
SELECT * FROM hr.departments@orcl;

CREATE OR REPLACE VIEW u1.JOIN_EMP_DEPT_V AS
SELECT a.employee_id, a.department_id,
       (SELECT department_name FROM u1.JUST_DEPT_MV d
        WHERE d.department_id = a.department_id) department_name
FROM hr.employees@orcl a;
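
With only one remote object left in the query, creating the materialized view from the private schema should now go through; a hedged closing step, assuming the behavior described in Doc ID 2305974.1:

conn U2/U2

-- The view now references a single remote object, so this works.
CREATE MATERIALIZED VIEW u1.JOIN_EMP_DEPT_MV AS
SELECT * FROM u1.JOIN_EMP_DEPT_V;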

A short reminder – ORA-19279: XPTY0004 – XQuery dynamic type mismatch: expected singleton sequence – got multi-item sequence

ORA-19279: XPTY0004 - XQuery dynamic type mismatch: expected singleton sequence - got multi-item sequence
19279. 00000 -  "XQuery dynamic type mismatch: expected singleton sequence - got multi-item sequence"

A short reminder: this message may look intimidating, but this is one of the rare cases where the Oracle error text actually explains the problem well:

*Cause: The XQuery sequence passed in had more than one item.
*Action: Correct the XQuery expression to return a single item sequence.

Basically, it means that there is more than one instance of the tag you are looking for at the level you are looking at. Or, in plain English: duplicate nodes.

For example:

SELECT ITEMS
FROM xmltable(
  '*'
  PASSING XMLTYPE('<ROWSET>
    <ITEMS>
      <ITEM>AAA</ITEM>
    </ITEMS>
    <ITEMS>
      <ITEM>BBB</ITEM>
    </ITEMS>
  </ROWSET>')
  COLUMNS
    ITEMS varchar2(10) PATH '//*:ITEM'
);

ORA-19279: XPTY0004 – XQuery dynamic type mismatch: expected singleton sequence – got multi-item sequence

One solution is to point the XQuery string at the correct level. In our example, all you need to do is start the query at the ITEMS level:

SELECT ITEMS
FROM xmltable(
  '//ITEMS'
  PASSING XMLTYPE('<ROWSET>
    <ITEMS>
      <ITEM>AAA</ITEM>
    </ITEMS>
    <ITEMS>
      <ITEM>BBB</ITEM>
    </ITEMS>
  </ROWSET>')
  COLUMNS
    ITEMS varchar2(10) PATH '//*:ITEM'
);

ITEMS
----------
AAA
BBB

Informatica – FATAL ERROR : Signal Received: SIGSEGV (11)

While running a relatively simple Informatica mapping, I got the following error:
Severity: FATAL
Message Code:
Message: *********** FATAL ERROR : Caught a fatal signal or exception. ***********
Message Code:
Message: *********** FATAL ERROR : Aborting the DTM process due to fatal signal or exception. ***********
Message Code:
Message: *********** FATAL ERROR : Signal Received: SIGSEGV (11) ***********

Basically, errors like this mean that something went terribly wrong and the Informatica server did not know how to handle it. In many cases, you will need to contact Informatica support to help you find a solution.

Various reasons can cause this kind of error:

The tnsnames.ora file used by the Oracle client installed on the PowerCenter server machine got corrupted.
If this is the case, other workflows using the same TNS entry would probably break as well. The solution is to recreate the tnsnames.ora file (or contact Oracle support).

Another cause of the SIGSEGV fatal crash is a problem with the DB drivers on the Informatica server. Many users report this issue with Teradata and Oracle.

The above reasons would probably be system-wide and would cause problems in many workflows. In my case, only the process I was working on had this problem, while all other workflows were working fine. So, I could assume the problem was limited to something specific in my mapping. I looked at the log and found out that the issue happens immediately after the following message:

Message Code: PMJVM_42009
Message: [INFO] Created Java VM successfully.

This led me to the conclusion that this problem is Java related.

It is possible that the Oracle libraries/symbols are loaded first and there is a conflict of symbols between the Java libraries and the Oracle client libraries.
In this case, Informatica suggests setting the following environment variables:

LD_PRELOAD=$INFA_HOME/java/jre/lib/amd64/libjava.so
LD_LIBRARY_PATH=$INFA_HOME/java/jre/lib/amd64:$LD_LIBRARY_PATH

The problem with setting environment variables is that it is a system-wide change, and there is always a risk of breaking something. So, changing a setting at the workflow or session level is always preferred. This way, if something breaks, the problem is contained.

While looking for a workflow-level solution, I came across the following article, which suggests that the problem is a result of a mismatch between the Linux Java and the PowerCenter version.

“This is an issue with the Oracle JDK 1.6 in combination with RHEL Linux. Java txn, and in turn the session is impacted.

The JDK has a Just In Time (JIT) compiler component that works towards optimizing Java bytecode to underlying native assembly code. Assembly code is native and is faster than Java byte code.

The crash occurs in this JIT generated assembly code, which means that native debuggers will not be able to resolve the symbols.”

There are two solutions to the problem. One is forcing the session (DTM) to spawn the Java transformation with JDK 1.7 rather than JDK 1.6. There are details in the above link but, again, this is a major change with many possible implications.

The second (and simpler) option is disabling the JDK's Just-In-Time compiler.

You can do this at the session level by adding the following JVM option as a custom property (Edit Task – Config Object – Custom Properties):

JVMOption1 = -Xint

It is important to understand that this means no optimization is performed on the Java transformation's bytecode, so a performance hit is expected in the Java transformation alone. Therefore, if the Java transformation is performance sensitive, you might want to consider the first solution.