A common way to invoke REST services is to use the Informatica HTTP transformation. This is an easy solution and it works well. You get the response in the HTTPOUT field, and from there you can write it to the database, parse it with a Data Processor mapplet, or even use simple string functions.
While this is fine when the service returns a successful response, by default, on a failure response (4xx and 5xx HTTP errors, for example) the HTTPOUT port is empty and an error is raised.
So, what can you do if you want to write the failure response to the database and continue to the next row? That way, the task can complete and the failed records can be logged or retried.
The simple solution is to set the custom property IncludeErrorResponsesInHTTPTransformation=Yes.
This can be done at the session level, but also at the Integration Service level.
To set it at the session level: Workflow Manager -> Edit Tasks -> Config Object -> Custom Properties -> add the attribute IncludeErrorResponsesInHTTPTransformation and write Yes in the Value field.
Once the custom property is set, the error response comes out of the HTTPOUT port of the HTTP transformation just like a successful response.
“This error occurs because the SQL query sent to the database for the Source Qualifier only contains the Join statement – this is caused by putting the JOIN statement in the SQL Query property instead of in the User Defined Join property of the Source Qualifier.”
This is a little confusing because the message talks about joins when, in many cases, like the example above, there is no join at all.
Still, the problem is the same, and all you need to do is move the query from the SQL Query property to the User Defined Join property.
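As a sketch of the idea, assuming hypothetical EMPLOYEES and DEPARTMENTS sources (all names below are made up), leave the generated query alone and keep only the condition in the transformation property:

-- What was in the SQL Query property (a full override, including the join):
SELECT EMPLOYEES.EMPLOYEE_ID, EMPLOYEES.LAST_NAME, DEPARTMENTS.DEPARTMENT_NAME
FROM EMPLOYEES, DEPARTMENTS
WHERE EMPLOYEES.DEPARTMENT_ID = DEPARTMENTS.DEPARTMENT_ID

-- What goes in the User Defined Join property instead (only the condition;
-- the SELECT list and FROM clause are generated from the connected ports):
EMPLOYEES.DEPARTMENT_ID = DEPARTMENTS.DEPARTMENT_ID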
ORA-02019: connection description for remote database not found
TL;DR
When you try to create a materialized view in a different schema and the query selects from more than one remote object, you get the misleading error ORA-02019: connection description for remote database not found.
Your options:
Change the query to use only one remote object.
Connect as the other user and create the materialized view.
Ugly workaround – create a materialized view for one of the remote objects and use it in the query.
Long version:
Lately, I was trying to create a materialized view. The problem was that the query behind the view used a subquery. As a result, when I tried to run the CREATE MATERIALIZED VIEW command, I got the famous ORA-22818: subquery expressions not allowed here.
create MATERIALIZED VIEW u1.JOIN_EMP_DEPT_MV as
select a.employee_id, a.department_id,
       (select department_name from hr.departments@orcl d
        where d.department_id = a.department_id) department_name
from hr.employees@orcl a;
According to the Oracle error text, a materialized view containing a subquery expression like this is not supported:
ORA-22818: subquery expressions not allowed here
22818. 00000 - "subquery expressions not allowed here"
*Cause: An attempt was made to use a subquery expression where these
are not supported.
*Action: Rewrite the statement without the subquery expression.
The workaround was to move the subquery into a regular view and then build a simple materialized view on top of it. As a side note, building a materialized view on top of a regular view is good practice in general, since it lets you change the underlying query (with CREATE OR REPLACE VIEW) without having to drop and recreate the materialized view.
The first part of the process went well: the view was created and worked as expected.
create VIEW u1.JOIN_EMP_DEPT_V as
select a.employee_id, a.department_id,
       (select department_name from hr.departments@orcl d
        where d.department_id = a.department_id) department_name
from hr.employees@orcl a;
However, once I tried to create the materialized view in the application schema (U1) while connected as my private user (U2), I got the following message:
conn U2/U2
create MATERIALIZED VIEW u1.JOIN_EMP_DEPT_MV as
select * from u1.JOIN_EMP_DEPT_V;

ORA-02019: connection description for remote database not found
This message is completely misleading. We know that the db link exists and we know it works. After all, the view was created using this db link.
create or replace VIEW u1.JOIN_EMP_DEPT_V as
select a.employee_id, a.department_id
from hr.employees@orcl a;
One remote object – Works!
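As an extra sanity check, connected as the schema that owns the database link, a quick way to confirm the link itself responds (assuming the link name orcl used throughout this post) is to select from it directly:

select * from dual@orcl;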
In addition, user U1 can create the materialized view without a problem.
Sadly, according to Oracle, this is not a bug. Basically, they claim that this is the way it works and you can file an enhancement request.
If creating the view with only one remote object or connecting as the other user is not an option, one ugly workaround is to create a materialized view for one of the remote tables and join it in the view, leaving only one remote object in the query.
create MATERIALIZED VIEW u1.JUST_DEPT_MV as
select * from hr.departments@orcl;

create or replace VIEW u1.JOIN_EMP_DEPT_V as
select a.employee_id, a.department_id,
       (select department_name from u1.JUST_DEPT_MV d
        where d.department_id = a.department_id) department_name
from hr.employees@orcl a;
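With the workaround in place, the view references only one remote object, so, per the behaviour shown above, the original statement should now work from U2 as well. A short sketch of the remaining steps (same object names as above; the refresh calls use the standard DBMS_MVIEW package, and the schedule is up to you):

-- The view now contains a single remote object, so this should succeed from U2:
create MATERIALIZED VIEW u1.JOIN_EMP_DEPT_MV as
select * from u1.JOIN_EMP_DEPT_V;

-- JUST_DEPT_MV is a local snapshot of the remote table, so refresh it (and then
-- the outer materialized view) as often as the data needs to be current:
exec DBMS_MVIEW.REFRESH('U1.JUST_DEPT_MV');
exec DBMS_MVIEW.REFRESH('U1.JOIN_EMP_DEPT_MV');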
A short reminder: this message may look intimidating, but this is one of the rare cases where the Oracle error text actually explains the problem well:
*Cause: The XQuery sequence passed in had more than one item.
*Action: Correct the XQuery expression to return a single item sequence.
Basically, it only means that there is more than one instance of the tag you are looking for, at the level you are looking at. Or, in plain English, duplicate nodes.
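For example, here is a minimal Oracle SQL sketch of the situation (the order/item XML and the column name are made up, and the assumption is that the error above is the kind raised by XMLTABLE/XMLQUERY, such as ORA-19279):

-- 'item' appears twice under 'order', so a column path that expects a single
-- node receives a two-item sequence and the statement fails:
select x.*
from XMLTABLE('/order'
       passing XMLTYPE('<order><item>A</item><item>B</item></order>')
       columns item_name VARCHAR2(10) path 'item') x;

-- Iterating over the repeating node instead returns one row per item:
select x.*
from XMLTABLE('/order/item'
       passing XMLTYPE('<order><item>A</item><item>B</item></order>')
       columns item_name VARCHAR2(10) path '.') x;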
While running a relatively simple Informatica mapping, I got the following error:
Severity: FATAL
Message Code:
Message: *********** FATAL ERROR : Caught a fatal signal or exception. ***********
Message Code:
Message: *********** FATAL ERROR : Aborting the DTM process due to fatal signal or exception. ***********
Message Code:
Message: *********** FATAL ERROR : Signal Received: SIGSEGV (11) ***********
Basically, errors like this mean that something went terribly wrong and the Informatica server did not know how to handle it. In many cases, you will need to contact Informatica support to help you find a solution.
Various reasons can cause this kind of error:
The tnsnames.ora file used by the Oracle client installed on the PowerCenter server machine gets corrupted. If this is the case, other workflows using the same TNS entry would probably break as well. The solution is to recreate the tnsnames.ora file (or contact Oracle support).
Another cause for the SIGSEGV fatal error crash is a problem with the DB drivers on the Informatica server. Many users report this issue with Teradata and Oracle.
The above reasons would probably be system-wide and would cause problems in many workflows. In my case, only the process I was working on had this problem, while all other workflows were running without a problem. So, I could assume that the problem was limited to something specific in my mapping. I looked at the log and found out that this issue happens immediately after the following message:
Message Code: PMJVM_42009
Message: [INFO] Created Java VM successfully.
This led me to the conclusion that this problem is Java related.
It is possible that the Oracle libraries/symbols are loaded first and there is a conflict of symbols between Java libraries and Oracle client libraries.
In this case, Informatica suggests setting up the following environment variables:
LD_PRELOAD = $INFA_HOME/java/jre/lib/amd64/libjava.so
LD_LIBRARY_PATH = $INFA_HOME/java/jre/lib/amd64:$LD_LIBRARY_PATH
The problem with setting environment variables is that it is a system-wide change, and there is always a risk of breaking something. So, changing a setting at the workflow or session level is always preferred. This way, if something breaks, the problem is contained.
While looking for a workflow-level solution, I came across the following article, suggesting that the problem is the result of a mismatch between the Linux Java version and the PowerCenter version.
“This is an issue with Oracle JDK 1.6 in combination with RHEL Linux. The Java transformation, and in turn the session, is impacted.
The JDK has a Just In Time (JIT) compiler component that works towards optimizing Java bytecode to underlying native assembly code. Assembly code is native and is faster than Java byte code.
The crash occurs in this JIT generated assembly code, which means that native debuggers will not be able to resolve the symbols.”
There are two solutions to the problem. The first is forcing the session (DTM) to spawn the Java transformation with JDK 1.7 rather than JDK 1.6. There are details in the above link but, again, this is a major change with many possible implications.
The second (and simpler) option is disabling the JDK's Just In Time compiler.
You can do this at the session level by adding the following JVM option as a custom property: JVMOption1 = -Xint
In Edit Task – Config Object – Custom Properties.
It is important to understand that this means no optimization is performed on the Java transformation's bytecode, so a performance hit is expected in the Java transformation itself. Therefore, if the Java transformation is performance sensitive, you might want to consider the first solution.