A short reminder: Why the OSB Database Adapter returns ORA-01403 when the function returns results in SQL*Plus

In OSB (Oracle Service Bus), a Database Adapter is a great tool for wrapping a database function and using it as part of a complex process or exposing it as a web service. A common practice is for the function to return a table of user defined types (UDT) that can be read by the adapter.

During the creation of the DB adapter, the Oracle user used by JDeveloper needs to be granted permissions on the function and on the returned user defined types (the object and the table of the object).

However, during runtime, the OSB Database Adapter sometimes fails with BEA-382500 OSB Service Callout action received SOAP Fault response. Inside the results you can find the following line:

Cause: java.sql.SQLException: ORA-01403: no data found … … nested exception is: BINDING.JCA-11812 or BINDING.JCA-11811

This message can be misleading because, many times, running the same function with the same user in SQL*Plus will return results. Luckily, the solution is simple: for the Database Adapter to work properly, all you need to do is grant the user running the DB adapter EXECUTE permission on ALL types used by the function (even internally).
For some reason (maybe a difference between the Oracle clients), the DB adapter requires permissions on the internal types used inside the function, while SQL*Plus (or Toad, or PL/SQL Developer) can work without these permissions.
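
For example, a minimal sketch of the grants, assuming hypothetical names (app_owner owns the function and the types, osb_user is the user configured for the adapter, and my_inner_type is a type used only inside the function body):

grant execute on app_owner.my_function   to osb_user;
grant execute on app_owner.my_obj_type   to osb_user;  -- the returned object type
grant execute on app_owner.my_obj_table  to osb_user;  -- the table of the object type
grant execute on app_owner.my_inner_type to osb_user;  -- internal type: SQL*Plus works without this grant, the adapter does not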

How to schedule a job to run at a different hour on specific days

Oracle jobs are a simple way to schedule procedures and processes. However, sometimes the scheduling requirements can get really complicated. A very common request is to run a job at a specific hour on weekdays and not to run it on the weekend. This request can easily be handled using the “WHAT” parameter of the dbms_job.submit procedure.

begin
  -- 'D' (day of week) is NLS_TERRITORY dependent; with the default AMERICAN setting, 2-6 maps to Monday-Friday
  if to_char(sysdate, 'D') between 2 and 6 then
    my_procedure;
  end if;
end;
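
Putting it together, a minimal sketch of submitting such a job (my_procedure stands in for your procedure; 16:00 is an arbitrary run time and the interval shown is a hypothetical daily rerun):

declare
  my_job number;
begin
  dbms_job.submit(job  => my_job,
                  what => 'begin
                             if to_char(sysdate, ''D'') between 2 and 6 then
                               my_procedure;
                             end if;
                           end;',
                  next_date => sysdate,
                  interval  => 'trunc(sysdate + 1) + 16/24');
  commit;
end;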

However, what will you do if you need to run the procedure at a specific hour Monday to Friday but at a different hour on the weekend?
In this case you will be happy to know that you can use CASE in the “INTERVAL” parameter of the dbms_job.submit procedure.

declare
  my_job number;
begin
  dbms_job.submit(job  => my_job,
                  what => 'my_procedure;',
                  interval => 'case
                                  when to_number(to_char(sysdate, ''D'')) between 1 and 5 then
                                    TRUNC(SYSDATE+1) + 16/24
                                  when to_number(to_char(sysdate, ''D'')) between 6 and 7 then
                                    TRUNC(SYSDATE+1) + 8/24
                                end');
end;

Remember that this calculation is for the NEXT run, not for the current one. Therefore, you will need to take into consideration that the timing for Monday will be calculated on Sunday (I know it sounds obvious, but you would be surprised how many times you can make the same mistake).
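
A quick way to sanity check the expression is to evaluate it directly against dual and see which date it produces, e.g.:

select case
         when to_number(to_char(sysdate, 'D')) between 1 and 5 then
           TRUNC(SYSDATE+1) + 16/24
         when to_number(to_char(sysdate, 'D')) between 6 and 7 then
           TRUNC(SYSDATE+1) + 8/24
       end as next_run_date
  from dual;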

While the CASE clause can handle many calculations and complications, there is one caveat: the INTERVAL value is limited to 200 characters (the size of the SYS.JOB$.INTERVAL# column). This becomes a problem when you try to write a longer statement. For example:

case
  when  to_number(to_char(sysdate, 'D')) between 1 and 4 then 
      TRUNC(SYSDATE+1) + 16 /24 
  when  to_number(to_char(sysdate, 'D'))= 5 then
      TRUNC(SYSDATE+1) + 13 /24     
  when  to_number(to_char(sysdate, 'D'))= 6 then
      TRUNC(SYSDATE+2) + 16 /24          
  else
      TRUNC(SYSDATE+1) + 16 /24 
 end

If you try to submit a job with this statement, you will probably get the following error:

ORA-12899: value too large for column "SYS"."JOB$"."INTERVAL#" (actual: 257, maximum: 200)
ORA-06512: at "SYS.DBMS_IJOB", line 290
ORA-06512: at "SYS.DBMS_JOB", line 159
ORA-06512: at line 4

You can work around this problem by using a function to calculate the INTERVAL value.

create or replace function get_next_run_date return date is
  res date;
begin
  select case
           when to_number(to_char(sysdate, 'D')) between 1 and 4 then
             TRUNC(SYSDATE+1) + 16/24
           when to_number(to_char(sysdate, 'D')) = 5 then
             TRUNC(SYSDATE+1) + 13/24
           when to_number(to_char(sysdate, 'D')) = 6 then
             TRUNC(SYSDATE+2) + 16/24
           else
             TRUNC(SYSDATE+1) + 16/24
         end
    into res
    from dual;
  return res;
end;
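
You can sanity check the function before wiring it into the job:

select get_next_run_date from dual;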

Once the function is created, all you need to do is reference it in the INTERVAL parameter of dbms_job.submit:

declare
  my_job number;
begin
  dbms_job.submit(job  => my_job,
                  what => 'my_procedure;',
                  interval => 'get_next_run_date');
end;
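
After submitting (and committing), you can check what was scheduled and when it will run next by querying the user_jobs view:

select job, what, next_date, interval from user_jobs;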

A short reminder – SQLException: Unsupported character set (add orai18n.jar to your classpath)

After upgrading one of our testing environments to a newer Oracle version, some of our services started to return the following error:

SQLException: Unsupported character set (add orai18n.jar to your classpath)

From the error message wording, one could suspect that this is a character set issue, but after running

Select value from SYS.NLS_DATABASE_PARAMETERS where PARAMETER = 'NLS_CHARACTERSET'; 

We found out that both databases return the same results.

Therefore, the likely cause is a version mismatch between the client and the server, and this message is Oracle’s way of telling you that you need to upgrade your client. Since we are using WebLogic, the upgrade process is more complicated.

As suggested, we tried adding the orai18n.jar to the classpath but to no avail.

In addition, we discovered that all the problematic services select XMLType from the database, so we assumed that this issue could be solved by some kind of encoding workaround.
We tried converting the character set (convert(col1, 'IW8ISO8859P8', 'UTF-8');) and tweaking the NLS settings (alter session set nls_territory), only to discover that the simple solution is to convert the XMLType to a string and then convert it back to XMLType:

select XMLELEMENT("RESULTS",
                    XMLELEMENT(  "Field",XMLATTRIBUTES(col_name_1 as "name") ,col_value_1 ),
                    XMLELEMENT(  "Field",XMLATTRIBUTES(col_name_2 as "name") ,col_value_2 )
                    
                    )
                                    
into xml --  xmltype 
from (select     
        'NAME1' as col_name_1
        ,'VALUE1' as  col_value_1
        ,'NAME2' as col_name_2
        ,'VALUE2' as  col_value_2 
 from dual   
        );

return xmltype(to_clob(xml.getStringVal()));

Logstash and Oracle – JDBC input settings

Lately, I began experimenting with Elasticsearch, Logstash and Kibana. As a DBA, the first thing I wanted to try is database connectivity. Just like with any new emerging technology, the Oracle database is not the first priority in their implementation (and, of course, documentation). There are many more examples for sexier (and free) databases like PostgreSQL or MySQL. As a result, there are only a few examples and the configuration is a real PITA.

It is easy to find MySQL JDBC input examples for Logstash:

input {
  jdbc {
    jdbc_driver_library => "mysql-connector-java-5.1.36-bin.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/mydb"
    jdbc_user => "mysql"
    parameters => { "favorite_artist" => "Beethoven" }
    schedule => "* * * * *"
    statement => "SELECT * from songs where artist = :favorite_artist"
  }
}

It is much harder to find Oracle JDBC Logstash examples, but I found something to start with at https://discuss.elastic.co/t/logstash-jdbc-input-oracle-settings/26996

jdbc {
      # The path to our downloaded jdbc driver saved mine in /bin, driver ojdbc6.jar also works. 
      #Best location can be /vendor folder
           jdbc_driver_library => "ojdbc7.jar"
      # ORACLE Driver Class
           jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
      # ORACLE jdbc connection string to our database, ORACLE  jdbc:oracle:thin:@hostname:PORT/SERVICE
           jdbc_connection_string => "jdbc:oracle:thin:@hostname:PORT/SERVICE"
      # The user and password we wish to execute our statement as
           jdbc_user => "DBUSERNAME"
           jdbc_password => "DBPASSWORD"
   
    # our SQL query
    statement => "select * from AWESOMETABLE"

    #optional extras I use 
      type => "Database"
      tags => ["awesome", "import"]
 }

Since I did not have ojdbc7.jar on my computer, I chose to use ojdbc6.jar, so my Oracle JDBC Logstash configuration came out like this:

input {
  jdbc {
    jdbc_connection_string => "jdbc:oracle:thin:@\\my-host:1521:my-sid"
    jdbc_user => "db-user"
    jdbc_password => "db-pwd"
    jdbc_validate_connection => true
    jdbc_driver_library => "C:\path\to\ojdbc6.jar"
    jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
    statement => "SELECT * from my-table"
  }
}
output {
  elasticsearch {
    index => "contacts"
    document_type => "contact"
    document_id => "%{uid}"
    hosts => "localhost:9200"
  }
}

Along the way, I encountered two error messages that are the reason for this post.
The first:

Error: java::oracle.jdbc.driver.OracleDriver not loaded. Are you sure you’ve included the correct jdbc driver in :jdbc_driver_library?
You may be interested in the '--configtest' flag which you can use to validate logstash’s configuration before you choose to restart a running system.

Running the config with the '--configtest' flag did not help much, but eventually it turned out to be a simple typo. The Oracle driver was not loaded simply because the call was
"java::oracle.jdbc.driver.OracleDriver" instead of "Java::oracle.jdbc.driver.OracleDriver" (note the capital J).
(This one was really annoying, but I hope it will save you the time I spent on it.)

Another message I got along the way was:

Java::JavaSql::SQLRecoverableException: IO Error: Invalid connection string format, a valid format is: "host:port:sid"

Apparently, while all the examples show the connection string in the following format:
jdbc:mysql://localhost:3306/mydb
the valid format in my case (Oracle client 11 on Windows 7) was
jdbc:oracle:thin:@\\my-host:1521:my-sid

This should get you connected, but if you still get

Java::JavaSql::SQLRecoverableException: IO Error: The Network Adapter could not establish the connection
Check the following:
The database and the listener are running.
The port specified in the configuration is the same port the database listener is listening on.
There is no firewall blocking the connection.
Good luck!

Can you move a table between schemas?

Many reasons can raise the need to change a table’s owner or to move a table from one user (schema) to another. Regardless of the reason, one would expect Oracle to provide an easy way to move a table to a different schema.

Well, the short answer is NO. There is no direct method to change the table owner.

If you try something like

RENAME table_name to new_schema.table_name ;

Oracle will return ORA-01765: specifying table’s owner name is not allowed.
You can work around this error by omitting the schema name; however, in our case the schema is exactly what we want to change, so it is still a problem.
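
For the record, the rename that does work simply drops the schema qualifier, which only changes the table name within the same schema:

RENAME table_name TO new_table_name;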

Similarly, the ALTER TABLE command will not do the job either.

ALTER TABLE source_schema.table_name RENAME TO new_schema.table_name;

A different error message (ORA-14047: ALTER TABLE|INDEX RENAME may not be combined with other operations) is returned when trying to use ALTER TABLE to move the table to a different schema, but the result is the same.

Another option to move a table to a different user is to simply copy the table using

create table new_schema.table_name as select  * from  source_schema.table_name;

but this will generate a lot of I/O and redo, and it might take a lot of time, so it is a solution only for small tables.
Likewise, using export and import will require downtime.

While there is no direct method to move a table to a different schema, you can use one of the less known advantages of partitioning to do the task: the ability to exchange partitions between tables across schemas.
Following is the process to move a table from one schema to another:
1) Create a partitioned copy of the table in the target schema. You can extract the original table’s DDL using

select dbms_metadata.get_ddl('TABLE','TABLE2MOVE') from dual; 

If you want just the plain DDL, without all the storage definitions etc., you can run

begin
  dbms_metadata.set_transform_param(dbms_metadata.session_transform, 'SEGMENT_ATTRIBUTES', false);
end;
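
Putting the two together (set the transform parameter first, then extract the DDL; SOURCE_SCHEMA here stands for the owner of the original table):

begin
  dbms_metadata.set_transform_param(dbms_metadata.session_transform, 'SEGMENT_ATTRIBUTES', false);
end;
/
select dbms_metadata.get_ddl('TABLE', 'TABLE2MOVE', 'SOURCE_SCHEMA') from dual;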

Remember to add a partition clause at the end of the DDL:

 CREATE TABLE "NEW_SCHEMA"."TABLE2MOVE" 
   (	"ID" NUMBER, 
	"TEXT" VARCHAR2(100)
   ) 
   partition by hash (id) (partition p1);

2) Exchange the partition between the source and the target table

alter table NEW_SCHEMA.table2move exchange partition p1 with table SOURCE_SCHEMA.table2move;

3) That’s it! Verify it by selecting from the source and target tables:

select * from NEW_SCHEMA.table2move;
select * from SOURCE_SCHEMA.table2move;

One thing to remember is that partitioning is an extra cost option on top of the Enterprise Edition license. So, if you get ORA-00439: feature not enabled: Partitioning, the option is not enabled on your database. You can verify it with

select * from v$option;
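
or, to check just the relevant row (v$option exposes PARAMETER and VALUE columns):

select value from v$option where parameter = 'Partitioning';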