Tag Archives: Oracle

Informatica – FATAL ERROR : Signal Received: SIGSEGV (11)

While running a relatively simple Informatica mapping I got the following error.
Severity: FATAL
Message Code:
Message: *********** FATAL ERROR : Caught a fatal signal or exception. ***********
Message Code:
Message: *********** FATAL ERROR : Aborting the DTM process due to fatal signal or exception. ***********
Message Code:
Message: *********** FATAL ERROR : Signal Received: SIGSEGV (11) ***********

Basically, errors like this mean that something went terribly wrong and the Informatica server did not know how to handle it. In many cases you will need to contact Informatica support to help you find a solution.

Various reasons can cause this kind of error:

The tnsnames.ora file used by the Oracle Client installed on the PowerCenter server machine gets corrupted.
If this is the case, other workflows using the same TNS entry would probably break as well. The solution is to recreate the tnsnames.ora file (or contact Oracle support).

Another cause for the SIGSEGV fatal error crash is a problem with the DB drivers on the Informatica server. Many users report this issue with Teradata and Oracle.

The above reasons would probably be system-wide and would cause problems in many workflows. In my case, only the process I was working on had this problem while all other workflows were running without a problem. So I could assume that the problem was limited to something specific in my mapping. I looked at the log and found out that this issue happens immediately after the following message:

Message Code: PMJVM_42009
Message: [INFO] Created Java VM successfully.

This led me to the conclusion that this problem is Java related.

It is possible that the Oracle libraries/symbols are loaded first and there is a conflict of symbols between the Java libraries and the Oracle client libraries.
In this case, Informatica suggests setting the following environment variables:
LD_PRELOAD = $INFA_HOME/java/jre/lib/amd64/libjava.so
LD_LIBRARY_PATH = $INFA_HOME/java/jre/lib/amd64:$LD_LIBRARY_PATH

The problem with setting environment variables is that it is a system-wide change and there is always a risk of breaking something. So, changing a setting at the workflow or session level is always preferred. This way, if something breaks, the problem is contained.

While looking for a workflow-level solution I came across an article suggesting that the problem is a result of a mismatch between the Linux Java and the PowerCenter version.

“This is an issue with the Oracle JDK 1.6 in combination with RHEL Linux. Java txn, and in turn the session is impacted.

The JDK has a Just In Time (JIT) compiler component that works towards optimizing Java bytecode to underlying native assembly code. Assembly code is native and is faster than Java byte code.

The crash occurs in this JIT generated assembly code, which means that native debuggers will not be able to resolve the symbols.”

There are two solutions to the problem. One is forcing the session (DTM) to spawn the Java transformation with JDK 1.7 rather than JDK 1.6. There are details in the article above but, again, this is a major change with many possible implications.

The second (and simpler) option is disabling the JDK's Just-In-Time compiler.

You can do this at the session level by adding the following JVM option as a custom property to the Integration Service (IS):
JVMOption1 = -Xint
In Edit task – Config Object – Custom Properties.
It is important to understand that this means no optimization is performed on the Java transformation's bytecode, so a performance hit is expected in the Java transformation alone. Therefore, if the Java transformation is performance sensitive, you might want to consider the first solution.

A short reminder: Why the OSB Database Adapter returns ORA-01403 when the function returns results in SQL*Plus

In OSB (Oracle Service Bus) a Database Adapter is a great tool for wrapping a database function and using it as part of a complex process or exposing it as a web service. A common practice is that the function returns a table of user defined types (UDT) that can be read by the adapter.

During the creation of the DB adapter, the Oracle user that is used by JDeveloper needs to be granted permissions on the function and the returned user defined types (the object and the table of the object).

However, during runtime, the OSB Database Adapter sometimes fails with BEA-382500 OSB Service Callout action received SOAP Fault response. Inside the results you can find the following line:

“Cause: java.sql.SQLException: ORA-01403: no data found … … nested exception is: BINDING.JCA-11812 or BINDING.JCA-11811”

This message can be misleading because, many times, running the same function with the same user in SQL*Plus will return results. Luckily, the solution is simple: in order for the Database Adapter to work properly, all you need to do is grant the user running the DB adapter permissions on ALL types used by the function (even internally).
For some reason (maybe a difference between the Oracle clients), the DB adapter requires permissions on the internal types that are used inside the function, while SQL*Plus (or Toad, or PL/SQL Developer) can work without these permissions.
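
As a rough illustration (all object names below are hypothetical, not from the original post), the grants would look something like this, covering the function, the returned collection type, its element type, and any type used internally:

-- Hypothetical names: get_employees returns emp_tab (a table of emp_obj),
-- and emp_obj internally uses address_obj.
GRANT EXECUTE ON hr_owner.get_employees TO osb_user;
GRANT EXECUTE ON hr_owner.emp_tab       TO osb_user;
GRANT EXECUTE ON hr_owner.emp_obj       TO osb_user;
-- The internal type is the one SQL*Plus did not need but the DB adapter does:
GRANT EXECUTE ON hr_owner.address_obj   TO osb_user;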

Monitor index usage (are my indexes being used at all?)

Indexes are the easiest solution for fast query performance improvement, but the gain does not come without a cost. Indexes take up space and make inserts slower. In addition, many times the index is added to solve one slow query. If the query is not executed frequently, the performance improvement might not be worth the cost of making almost every insert and update slower. For example, even if the index is useful for a nightly or weekly report, the extra daily overhead might not be worth the few minutes it will save during the night.

Therefore, it is important to know if and how much the index is used. The tool for this job is index usage monitoring.

Basically, the process is very simple. The first step is turning on monitoring, since index usage monitoring is turned off by default. The next step is to let it run for a while. It is important to choose the monitoring period wisely, as it is supposed to correctly represent real-life usage. After that you can turn off monitoring and query the usage view.

Turning on index monitoring for a specific index:

ALTER INDEX "YOUR-SCHEMA-NAME"."YOUR-INDEX-NAME" MONITORING USAGE;

If you get: ORA-00054 resource busy and acquire with NOWAIT specified or timeout expired,
this is because altering a table or an index requires an exclusive lock on the table, so it will not work while DML operations are being performed on the table.
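
One workaround I find convenient (assuming you are on Oracle 11g or later) is to let the ALTER wait for the lock instead of failing immediately:

-- Wait up to 30 seconds for the exclusive lock instead of raising ORA-00054.
ALTER SESSION SET ddl_lock_timeout = 30;
ALTER INDEX "YOUR-SCHEMA-NAME"."YOUR-INDEX-NAME" MONITORING USAGE;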

Turning on index monitoring for all indexes of a specific schema:
declare
  query_str varchar2(200);
begin
  for i in (select owner, index_name
              from dba_indexes d
             where d.owner = 'YOUR_SCHEMA'
               and index_type != 'LOB') loop

    query_str := 'ALTER INDEX "' || i.owner || '"."' || i.index_name || '" MONITORING USAGE';
    execute immediate query_str;

  end loop;
end;
/
Without the index_type != 'LOB' filter you might get ORA-22864: cannot ALTER or DROP LOB indexes, which means that you cannot operate directly on a system-defined LOB index; you should perform operations on the corresponding LOB column instead.

Turning OFF index monitoring for a specific index:
ALTER INDEX "YOUR-SCHEMA-NAME"."YOUR-INDEX-NAME" NOMONITORING USAGE;

You can analyze the results by running:
select * from v$object_usage;
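
For a slightly more focused view (just a sketch; note that v$object_usage only reports indexes owned by the connected user), the interesting columns are MONITORING, USED and the monitoring window:

-- USED = 'YES' means the index was used at least once since monitoring started.
SELECT index_name,
       table_name,
       monitoring,
       used,
       start_monitoring,
       end_monitoring
  FROM v$object_usage
 ORDER BY used, index_name;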

Several points to take into consideration:
Don't rush to drop the index if you see that it wasn't used. It is possible that it is used only once a month but is still crucial.

If you turn monitoring off and then on again, the previous data is deleted. Therefore, it might be wise to back up v$object_usage from time to time (a simple snapshot approach is shown below).

After you drop an index, the index's data is deleted from v$object_usage.
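
A minimal way to keep such a backup (the table name here is just an example) is to snapshot the view with a timestamp:

-- First snapshot: create the backup table.
CREATE TABLE index_usage_snapshot AS
SELECT u.*, SYSDATE AS snapshot_date FROM v$object_usage u;

-- Subsequent snapshots: append to it before restarting monitoring.
INSERT INTO index_usage_snapshot
SELECT u.*, SYSDATE FROM v$object_usage u;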

Similar SQL/XML queries do not work the same way across Oracle versions

If you just upgraded your Oracle version and you see inconsistency in behavior of your old SQL/XML (SQLX) queries, you might want to check if the actual XML structure is still the same.

Specifically, there is a problem with the XMLAgg alias. If you run the same code on Oracle 9 or 10 (up to 10.2.0.3) and on later versions, the same query returns different results. If you used aliasing to name an aggregated collection, you will notice that the alias disappears in later versions.

For example, if you run the following code on Oracle 9:


with t as (SELECT 'Emp 1' name FROM dual
UNION ALL
SELECT 'Emp 2' name FROM dual
UNION ALL
SELECT 'Emp 3' name FROM dual)
SELECT xmlelement("EMPLOYEES_DATA",
(SELECT xmlagg(
XMLELEMENT("EMPLOYEE_NAME", t.name)
)
FROM t) as "EMPLOYEES").extract('/')
from dual

the result will be:


<EMPLOYEES_DATA>
<EMPLOYEES>
<EMPLOYEE_NAME>Emp 1</EMPLOYEE_NAME>
<EMPLOYEE_NAME>Emp 2</EMPLOYEE_NAME>
<EMPLOYEE_NAME>Emp 3</EMPLOYEE_NAME>
</EMPLOYEES>
</EMPLOYEES_DATA>

However, if you run the same query on Oracle 11, the result will be:


<EMPLOYEES_DATA>
<EMPLOYEE_NAME>Emp 1</EMPLOYEE_NAME>
<EMPLOYEE_NAME>Emp 2</EMPLOYEE_NAME>
<EMPLOYEE_NAME>Emp 3</EMPLOYEE_NAME>
</EMPLOYEES_DATA>

Notice that the <EMPLOYEES> tag is missing. Oracle simply ignores the alias.

Clearly, similar queries do not work the same way across Oracle versions. The problem is that the alias is dropped silently: the query does not fail, it just produces a different XML.

Unfortunately, to overcome this, a change in the code is required. You can easily add an additional XMLELEMENT.


with t as (SELECT 'Emp 1' name FROM dual
UNION ALL
SELECT 'Emp 2' name FROM dual
UNION ALL
SELECT 'Emp 3' name FROM dual)
SELECT xmlelement("EMPLOYEES_DATA",
xmlelement("EMPLOYEES",
(SELECT xmlagg(
XMLELEMENT("EMPLOYEE_NAME", t.name)
)
FROM t)) as "WILL_BE_IGNORED_ORACLE_11").extract('/')
from dual;

This will produce the pre-upgrade results:


<EMPLOYEES_DATA>
<EMPLOYEES>
<EMPLOYEE_NAME>Emp 1</EMPLOYEE_NAME>
<EMPLOYEE_NAME>Emp 2</EMPLOYEE_NAME>
<EMPLOYEE_NAME>Emp 3</EMPLOYEE_NAME>
</EMPLOYEES>
</EMPLOYEES_DATA>

Another way is to use SYS_XMLAGG together with XMLFormat, but this would require a code change anyway, so I believe the best alternative is to add an additional XMLELEMENT.

How to resolve ORA-24247: network access denied by access control list (ACL)

If you are trying to use Oracle to invoke a web service or to connect to an SMTP service and you get the following error:

ORA-24247: network access denied by access control list (ACL)

The cause, according to Oracle, is:

“No access control list (ACL) has been assigned to the target host or the privilege necessary to access the target host has not been granted to the user in the access control list.”

This error is common after an upgrade to Oracle 11. Before Oracle 11, using network resources via packages like utl_tcp, utl_smtp, utl_mail, utl_http, and utl_inaddr exposed the database to a serious security threat, because once a user was granted permission to use those packages, there was no other limitation on which computers they could connect to.

In Oracle 11, Oracle introduced fine-grained access to network services using access control lists (ACLs).

This new feature gives the DBA better control over which user can connect to which computer.

In order to solve ORA-24247 you will need to:

1) Create an ACL (if it is not already created)

2) Add privileges to the user using the network resources

3) Assign the ACL to a specific address

1) Run the following query to check if an ACL exists:

SELECT *
FROM dba_network_acls;

If the computer you are trying to connect to is not listed under HOST, you will need to create an ACL:


begin
  dbms_network_acl_admin.create_acl (
    acl         => 'http_permissions.xml', -- or any other name
    description => 'HTTP Access',
    principal   => 'SCOTT', -- the user name trying to access the network resource
    is_grant    => TRUE,
    privilege   => 'connect',
    start_date  => null,
    end_date    => null
  );
end;
/
commit;

This will create the ACL and grant SCOTT the connect privilege.

2) If the ACL exists, run the following query to verify that the user is granted the appropriate privilege:

SELECT *
FROM dba_network_acl_privileges
WHERE principal = 'SCOTT';

In order to use UTL_TCP, UTL_HTTP, UTL_SMTP, and UTL_MAIL the user will need the connect privilege:

begin
  DBMS_NETWORK_ACL_ADMIN.ADD_PRIVILEGE(
    acl       => 'http_permissions.xml',
    principal => 'SCOTT',
    is_grant  => true,
    privilege => 'connect');
end;
/
commit;

If you need to resolve a host name from a host IP you will need the resolve privilege as well:

begin
  DBMS_NETWORK_ACL_ADMIN.ADD_PRIVILEGE(
    acl       => 'http_permissions.xml',
    principal => 'SCOTT',
    is_grant  => true,
    privilege => 'resolve');
end;
/
commit;

3) The final step is to assign the ACL to a specific target:

BEGIN
  dbms_network_acl_admin.assign_acl (
    acl        => 'http_permissions.xml',
    host       => 'NETWORK ADDRESS', /* can be a computer name or an IP; wildcards are accepted as well, for example '*.us.oracle.com' */
    lower_port => 80,
    upper_port => 80
  );
END;
/
commit;

It is important to note that only one ACL can be assigned to any host computer. If you assign a new ACL to a target, the old ACL gets unassigned.

However, the old ACL is not dropped. This can cause confusion, because even if an ACL was already assigned to the host, it is possible that a newer assignment overrode it.
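
A quick sanity check after the assignment is to join the two dictionary views already used above and confirm which ACL is currently attached to the host and what privileges the user has on it (the host value below is just the placeholder from the example):

SELECT a.host, a.lower_port, a.upper_port, a.acl,
       p.principal, p.privilege, p.is_grant
  FROM dba_network_acls a
  JOIN dba_network_acl_privileges p ON p.acl = a.acl
 WHERE a.host = 'NETWORK ADDRESS'
   AND p.principal = 'SCOTT';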