
Informatica – FATAL ERROR : Signal Received: SIGSEGV (11)

While running a relatively simple Informatica mapping I got the following error.
Severity: FATAL
Message Code:
Message: *********** FATAL ERROR : Caught a fatal signal or exception. ***********
Message Code:
Message: *********** FATAL ERROR : Aborting the DTM process due to fatal signal or exception. ***********
Message Code:
Message: *********** FATAL ERROR : Signal Received: SIGSEGV (11) ***********

Basically, an error like this means that something went terribly wrong and the Informatica server did not know how to handle it. In many cases you will need to contact Informatica support to help you with a solution.

Various reasons can cause this kind of error:

The tnsnames.ora file used by the Oracle Client installed on the PowerCenter server machine is corrupted.
If this is the case, other workflows using the same TNS entry would probably break as well. The solution is to recreate the tnsnames.ora file (or contact Oracle support).

Another cause for the SIGSEGV fatal error crash is a problem with the DB drivers on the Informatica server. Many users report this issue with Teradata and Oracle.

The above reasons would probably be system-wide and would cause problems in many workflows. In my case, only the process I was working on had this problem, while all the other workflows were running without a problem. So I could assume that the problem was limited to something specific in my mapping. I looked at the log and found that the issue happened immediately after the following message:

Message Code: PMJVM_42009
Message: [INFO] Created Java VM successfully.

This led me to the conclusion that this problem is Java related.

It is possible that the Oracle libraries/symbols are loaded first and there is a conflict of symbols between the Java libraries and the Oracle client libraries.
In this case, Informatica suggests setting the following environment variables:
LD_PRELOAD = $INFA_HOME/java/jre/lib/amd64/libjava.so
LD_LIBRARY_PATH = $INFA_HOME/java/jre/lib/amd64:$LD_LIBRARY_PATH

The problem with setting environment variables is that it is a system-wide change, and there is always a risk of breaking something. Changing a setting at the workflow or session level is always preferred; this way, if something breaks, the problem is contained.

While looking for a workflow-level solution I came across the following article, which suggests that the problem is the result of a mismatch between the Linux Java version and the PowerCenter version.

“This is an issue with the Oracle JDK 1.6 in combination with RHEL Linux. Java txn, and in turn the session is impacted.

The JDK has a Just In Time (JIT) compiler component that works towards optimizing Java bytecode to underlying native assembly code. Assembly code is native and is faster than Java byte code.

The crash occurs in this JIT generated assembly code, which means that native debuggers will not be able to resolve the symbols.”

There are two solutions for the problem. One is forcing the session (DTM) to spawn the Java transformation with JDK 1.7 rather than JDK 1.6. There are details in the above link but, again, this is a major change with many possible implications.

The second (and simpler) option is disabling the JDK’s Just In Time compiler.

You can do this at the session level by adding the following JVM option as a custom property to the Integration Service:
JVMOption1 = -Xint
In Edit Task – Config Object – Custom Properties
It is important to understand that this means no optimization is performed on the Java transformation’s bytecode, so a performance hit is expected in the Java transformation alone. Therefore, if the Java transformation is performance sensitive, you might want to consider the first solution.

Informatica – [PCSF_46002] Failed to communicate with the server

After serving me well for several years, my Informatica client suddenly stopped working.
I got the following error when trying to connect to one of my domains.

Error: [PCSF_46002] Failed to communicate with the server at [http://[server]:[port]/coreservices/DomainService].

Since it took me a good few hours to find the solution, I will describe all my failed attempts as well as the actual solution in my case. I hope it will save someone (probably me) some time in the future. (TL;DR – delete the proxy environment variables)

Verify that the server is working – In my case, the domain I was trying to connect to was Production, so the first thing I did was verify that the server was up and that the Integration Service was running. Very quickly I found that all the target tables were still being updated as expected. Knowing that the server was running allowed me to proceed with less stress.

Verify that the Informatica Administration console is available.
Since I was able to connect to my Informatica Administration web console at http://server:port/administrator/#admin, I came to the conclusion that the problem was on the client side. Luckily, I was able to confirm it by connecting to the domain from one of my colleagues’ workstations. Only after that did I understand that I was not able to connect to any domain, so the problem was on my machine without a doubt.

On the client, I followed the Informatica KB suggestion at https://kb.informatica.com/solution/23/Pages/61/509256.aspx, clicked on “Configure Domains”, and tried to delete the domain entries and recreate them. This time I got an almost identical message:

Unable to save information for domain .
Error: [PCSF_46002] Failed to communicate with the server at [http://[server]:[port]/coreservices/DomainService].

Since the message started with “Unable to save”, the next suspect was a problem in the OS. I verified that I had permission to write to the INFA_HOME location (\clients\PowerCenterClient\domains.infa in my case). After some more googling, I also tried to manually write/edit the domains.infa file and created an INFA_DOMAINS_FILE environment variable pointing to it. Still, no success.

The next step was to check network connectivity. I verified that I was able to ping and telnet to the server. Since pinging the server was successful, my only remaining option was to verify that no firewall or proxy was blocking me. While searching the knowledge base for how to check the proxy settings, I came across this article:
https://kb.informatica.com/solution/18/Pages/121821.aspx
from which I learned that setting the following environment variables can interfere with attempts to connect to the domain:
• PROXY
• http_proxy
• https_proxy
Only then did I remember that, as part of the Docker installation on my PC, I had set the HTTP_PROXY environment variable. I deleted the HTTP_PROXY environment variable (on Windows: My Computer – Properties – Advanced system settings – Environment Variables) and, after a restart, my Informatica client came back to life. I wish the error messages were a little clearer, but I hope this blog post will help.

A short reminder: Why the OSB Database Adapter returns ORA-01403 when the function returns results in SQL*Plus

In OSB (Oracle Service Bus), a Database Adapter is a great tool for wrapping a database function and using it as part of a complex process or exposing it as a web service. A common practice is for the function to return a table of user-defined types (UDTs) that can be read by the adapter.

During the creation of the DB adapter, the Oracle user used by JDeveloper needs to be granted permissions on the function and on the returned user-defined types (the object and the table of the object).

However, during runtime, the OSB Database Adapter sometimes fails with BEA-382500: OSB Service Callout action received SOAP Fault response. Inside the results you can find the following line:

Cause: java.sql.SQLException: ORA-01403: no data found … nested exception is: BINDING.JCA-11812 or BINDING.JCA-11811

This message can be misleading because, many times, running the same function with the same user in SQL*Plus will return results. Luckily, the solution is simple: in order for the Database Adapter to work properly, all you need to do is grant the user running the DB adapter permissions on ALL the types used by the function (even internally).
For some reason (maybe a difference between the Oracle clients), the DB adapter requires permissions on the internal types that are used inside the function, while SQL*Plus (or Toad or PL/SQL Developer) can work without these permissions.
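
For example, here is a minimal sketch of the grants, assuming a hypothetical function hr.get_employees that returns a table type hr.t_emp_tab of object type hr.t_emp_row and internally uses another type hr.t_address (all names are made up for illustration):

-- The function itself and the types it returns:
GRANT EXECUTE ON hr.get_employees TO osb_user;
GRANT EXECUTE ON hr.t_emp_row TO osb_user;
GRANT EXECUTE ON hr.t_emp_tab TO osb_user;
-- The DB adapter also needs the types used only internally by the function,
-- even though SQL*Plus returns results without this grant:
GRANT EXECUTE ON hr.t_address TO osb_user;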

PLS-00801: internal error [string]

Apparently, PLS-00801: internal error [phdcsql_print_kge_errors::parm pos] is one of the most annoying Oracle errors. Of course, phdcsql_print_kge_errors::parm pos is just an example that can be replaced with any other undecipherable set of characters.

Oracle’s official error description is:

Cause: This is a generic internal error that might occur during compilation or execution. The first parameter is the internal error number.

Action: Report this error as a bug to your Customer Support representative.

While it might be very tempting to contact Oracle Support, many users will prefer to try to solve the problem without opening a service request and waiting for an answer.
Basically, what this error means is that there is a problem with the query, but Oracle is having a problem describing it. The result is a generic (and not really helpful) error message.
Like any generic error description, many reasons can cause the problem. The only solution is to try to identify the problem yourself by trial and error.

Copy the query part of the code to a separate window and try to tweak it without changing the basics: change the table order, change alias names, comment out what you can, or run it without variables. Even if this does not solve the problem, it can sometimes produce a more informative error message.
Using this method I have found several issues that caused the PLS-00801 internal error. I am listing a few of them in order to provide a starting list of things that can cause this error:

1) The query used a remote database over a db link and the database link user did not have the proper permissions.
2) Extra invisible control characters that were added to the code during a problematic copy-paste from the web or from Word (see the sketch at the end of this post).
3) Extra characters that have a special meaning in Oracle, like &, ", _ etc.
4) Extra spaces
5) Invalid objects on a remote database
6) Several known Oracle bugs (search Metalink for the error)
7) An error on a remote database that cannot be displayed
8) Your entry here …

Of course, there are countless other reasons for PLS-00801. If you encounter any other reason, please share it in the comments to help others.
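
Regarding reason (2), a quick way to expose invisible characters is Oracle's DUMP function, or a regular expression over the stored source. A minimal sketch (MY_PROC is a hypothetical unit name; paste your own suspicious snippet into DUMP):

-- Show the byte value of every character in a pasted snippet;
-- anything outside the expected printable range is a suspect:
SELECT DUMP('select * from emp') AS bytes FROM dual;

-- Search the stored source of a compiled unit for characters
-- that are neither printable nor ordinary whitespace:
SELECT line, text
FROM   user_source
WHERE  name = 'MY_PROC'
AND    REGEXP_LIKE(text, '[^[:print:][:space:]]');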

Why is Oracle not using my index? A checklist

One of the most common questions when it comes to performance tuning is “Why is Oracle not using my index?”. This question, together with its twin, “Why is Oracle choosing a full table scan even when the table is indexed?”, covers a large part of the performance tuning challenges.

Even well-experienced developers and DBAs can sometimes miss a very obvious reason causing the optimizer not to choose the expected plan. After finding myself time and time again looking for explanations for poor query performance, I decided to create a small checklist that I can go through methodically until I find the issue that is keeping the optimizer from using the index.

This is far from a complete list, and the post will probably be constantly updated as I encounter (OK, am reminded of) other performance-killing reasons.

1) Are the indexed columns being manipulated?
The index will be used only if the value of the indexed column is not manipulated in the where clause. Any manipulation on the “left side” of the where clause will prevent the index from being used (see the first sketch after this checklist). To check this, add the /*+ INDEX(a, i_empno) */ hint; if you still cannot see the index being used in the explain plan, then there is something in the query that prevents it from being used.

2) Are you using NOT?
The optimizer will not use the index if the query filters the indexed columns with NOT EQUAL or NOT IN.
3) Are you using LIKE (‘%…’)?
If you use a wildcard query with “%” in front of the string, the column index is not used and a full table scan is required, since the leading % can be replaced by any string. Therefore the optimizer needs to search the contents of that field in every row.
4) Are you using IS (NOT) NULL?
Null values are not included in the index. However, this can be worked around by using NVL when creating the index (a function-based index), adding the PK to the index, or even adding a constant to the index: create index emp_dob_idx on emp (date_of_birth, 1)
5) Are you using the leading columns of a concatenated index?
Remember to put the column with the most unique values first, to make the result set smaller.
6) Are you selecting from a view?
Make sure that the base tables have appropriate indexes.
7) Are the statistics relevant and valid? (See the second sketch after this checklist.)
8) Does the index exist at all?
You would be surprised how many times this is the problem.
9) Is the query expected to return a large portion of the table?
In this case the optimizer will prefer a full scan.
10) If you are using a subquery, you must use only IN or = (you can work around this by changing it to a join or a function).
11) Did you make a lot of changes to the table recently?
A large number of DML operations on the table might cause the statistics to become stale, so the optimizer stops using the index.
12) Is it possible that the data in the table is skewed?
The optimizer might expect a normal distribution of values between the minimum and maximum values and might choose a wrong plan if this is not the case. For example, using 01/01/0001 or 31/12/4000 as null values might confuse the optimizer (see the second sketch after this checklist).
13) Is your query actually trying to use the index?
Does the where clause contain the indexed column?
14) Is there a high degree of parallelism? A high degree of parallelism skews the optimizer toward full table scans. You can check it with: select DEGREE from dba_tables where table_name='table-name' and owner='owner-name';
15) A full scan will be cheaper than using an index if the table is small.
16) Is it using other indexes?
You may have other indexes that Oracle perceives as being “better” for the query.

17) Are you implicitly casting types?
Oracle sometimes casts implicitly. For example, it might cast varchar2 to number when the actual values allow it (the same goes for dates).
While this might work for a simple query, Oracle will not use an index when you join tables on fields of different types.

18) Wrong parameters:
optimizer_index_cost_adj – a low value reduces the cost of using indexes.
all_rows access method – the first_rows optimizer mode is more likely to use an index than the all_rows mode.
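
To make a couple of the items above concrete, here are two minimal sketches (emp, ename, and the index and schema names are hypothetical). The first shows a manipulated predicate that suppresses an index (item 1) and the function-based index fix; the second refreshes statistics and lets Oracle add histograms for skewed columns (items 7 and 12):

-- Item 1: manipulating the indexed column suppresses a plain index on ename:
SELECT * FROM emp WHERE UPPER(ename) = 'KING';
-- Fix: create a function-based index that matches the expression:
CREATE INDEX emp_ename_upper_idx ON emp (UPPER(ename));

-- Items 7 and 12: refresh statistics; SIZE AUTO lets Oracle decide
-- where histograms are needed to describe skewed columns:
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => 'SCOTT',
    tabname    => 'EMP',
    method_opt => 'FOR ALL COLUMNS SIZE AUTO');
END;
/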

Bonus advice:
Add the /*+ INDEX(a, index-name) */ hint. If you still cannot see the index being used in the explain plan, then there is something in the query that prevents it from being used (see the sketch below).

Add the /*+ RULE */ hint. If the query uses the index with a RULE hint, you know that the problem is related to the cost-based optimizer (CBO).
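
As a minimal sketch of checking whether the INDEX hint is honored (using the hypothetical emp table and the emp_dob_idx index from item 4 above):

EXPLAIN PLAN FOR
SELECT /*+ INDEX(e emp_dob_idx) */ *
FROM   emp e
WHERE  e.date_of_birth = DATE '1980-01-01';

-- Display the plan and look for the index in the operations:
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);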