
boneist_oracle
22 October 2011 @ 11:40 pm
Please repoint your rss feeds etc to: http://www.orchestrapit.co.uk

Thanks *{:-)
 
 
 
boneist_oracle
28 April 2011 @ 03:31 pm
This blog has moved - you can find this entry copied over here

As the title of this blog entry indicates, I encountered a problem yesterday with my CLOBs somehow ending up being truncated. I scratched my head, pored over the documentation and still came up blank.

I felt sure the problem was something to do with the multibyte character set my database (10.2.0.4) was set up with, but everything in the documentation indicated that DBMS_LOB subprograms worked in terms of characters, not bytes, when CLOBs were involved.

I managed to recreate my issue:

declare
    l_finalSQL VARCHAR2(32767 char);
    l_innerSQL CLOB := 'a';
    l_outerSQL CLOB := 'b';
    l_userid   INT;
    gOutersql varchar2(113) := 'cccccccccccccccccccccccccccccccccccccccccccccccccccccccccccc'||
                               'ccccccccccccccccccccccccccccccccccccccccccccccccccccc';
begin
  for i in 1..1342
  loop
    l_outersql := l_outersql||'b';
  end loop;

  dbms_output.put_line('l_outersql = '||length(l_outersql));

  for i in 1..10860
  loop
    l_innersql := l_innersql||'a';
  end loop;

  dbms_output.put_line('l_innersql = '||length(l_innersql));

  l_userid := 123;

  dbms_output.put_line('l_userid = '||length(l_userid));

  l_outerSQL := l_outerSQL || ' FROM (' || l_innerSQL || gOUTERSQL || TO_CHAR(l_userid);

  dbms_output.put_line('appended l_outersql = '||dbms_lob.getlength(l_outersql));

  IF dbms_lob.getlength(l_outerSQL) <= 32767 THEN
    l_finalSQL := dbms_lob.substr(l_outerSQL, 32767, 1);
  else raise too_many_rows;
  end if;

  dbms_output.put_line('l_finalsql = '||length(l_finalsql));
end;
/


which produced the following output:

l_outersql = 1343
l_innersql = 10861
l_userid = 3
appended l_outersql = 12327
l_finalsql = 10922


Note the 12327 vs 10922 - the expected and actual lengths of the l_finalSQL string.
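As a quick back-of-the-envelope check (sketched here in Python rather than PL/SQL, using the lengths from the dbms_output lines above), the expected length of the concatenated string really should be 12327:

```python
# Sanity check: expected length of the appended l_outersql, built from
# the component lengths printed by the dbms_output calls above.
l_outersql = 1343            # 'b' plus 1342 appended 'b's
from_clause = len(' FROM (') # the literal appended in the concatenation (7)
l_innersql = 10861           # 'a' plus 10860 appended 'a's
g_outersql = 113             # the 113-character 'c' string
l_userid = 3                 # length of TO_CHAR(123)

expected = l_outersql + from_clause + l_innersql + g_outersql + l_userid
print(expected)  # 12327 - matching the appended length, not the truncated 10922
```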

I could not spot what the problem was for the life of me, so I asked for help on my work's Oracle chat channel (don't think I didn't have a chuckle about that, given my previous blog entry!). Thankfully, someone else had the presence of mind to check My Oracle Support, and it turned out that I had most likely tripped over bug 9020537. This isn't fixed until 12.1 (!), 11.2.0.2 (Server Patch Set) or 10.2.0.5.3 (Patch Set Update).

Unfortunately, upgrading to any of these versions is not an option for us in the timescale I needed my code to be working, and there is no workaround (boo!), so I've had to rewrite my code to sidestep the issue. Annoying! (Even more so when the code in question exists to work around the fact that EXECUTE IMMEDIATE doesn't handle CLOBs until 11g...)
 
 
 
boneist_oracle
26 April 2011 @ 03:55 pm
This blog has moved - you can find this entry copied over here

When I first started working with Oracle, as a lowly system tester/support person, I had no idea there was such a thing as an Oracle community, beyond the developers and DBAs that I had contact with. In fact, it wasn't until I started my current job (4 years ago) that I realised there was even a hint of a community! That's probably a combination of how I became an Oracle developer and the structure of the company I worked for previously.

I consider myself a part of several Oracle communities:


  • OTN Forums

  • Work chat channels

  • the informal London pub Oracle massive*

  • the Oracle Blogging community

  • Twitter



and each one adds to my knowledge in different ways. I count myself very, very lucky that I work at a company which encourages communities via the IM chat program that we use - there are lots of channels dedicated to various areas, not just Oracle-based (eg. Unix, java, c# ....).

I think these channels are not only a vital source of help and knowledge, they're a great way of networking across the company. I now have a little network of like-minded DBAs and devs, spanning continents, that I can be cheeky and ask for help from (and vice versa, of course!), which has been very helpful when I've been stuck, etc!

Pooling knowledge and forging ties is at the heart of every community and by participating, you can't help but continue to keep learning new things or new ways to apply things you already know.

I would definitely encourage new (and old!) Oracle developers/DBAs to participate in at least one Oracle community; it'll be well worth their while to do so!

* OMG! I sat next to Jonathan Lewis the last time there was a pub outing. O. M. G!! (This is not me name dropping, unlike a certain Doug I know *{;-) - this is me being totally amazed at the company I found myself in, even though I've met JL several times before now! Someone pinch me....)
 
 
 
boneist_oracle
21 April 2011 @ 01:29 pm
This blog has moved - you can find this entry copied over here

Yesterday, I came across an extremely odd problem with a view that I had created. The problem I had was that I was joining the view to another table, based on the primary key of the main table in the view, yet it was doing a FTS of that table and then doing the filtering.

This made the query take ~18 minutes, yet I was convinced that it ought to be able to push the filtering inside the view. Nothing I tried would get the filtering to take place in the view, not push_pred or no_merge or no_unnest hint; nothing.

It was actually a conversation with Doug Burns about the trace file I generated that spat out a clue - thanks Doug!

I've managed to recreate the issue, so here goes:

create table t1 as 
select level col1,
       dbms_random.string('A', 5) col2,
       mod(level, 20) col3
from   dual
connect by level <= 10000;

alter table t1 add constraint t1_pk primary key (col1);

create table t2 as
select level col1,
       ceil(level/2) col2,
       dbms_random.string('A', 5) col3
from   dual
connect by level <= 20000;

alter table t2 add constraint t2_pk primary key (col1);

alter table t2 add constraint t2_t1_fk foreign key (col2) references t1 (col1);

create table t3 as
select level col1,
       level * 2 col2
from   dual
connect by level <= 19;

alter table t3 add constraint t3_pk primary key (col1);

begin
  dbms_stats.gather_table_stats(user, 'T1');
  dbms_stats.gather_table_stats(user, 'T2');
  dbms_stats.gather_table_stats(user, 'T3');
end;
/

create view t1_t3_view
as
with t3a as (select col1,
                    decode(col2, 4, 90, 8, 45, col2) col2
             from   t3)
select t1.col1,
       t1.col2,
       t1.col3,
       t3a.col2 t3_col2
from   t1,
       t3a
where  t1.col3 = t3a.col1 (+);


The following query does push the filtering into the view (see the use of the T1_PK index):

select *
from   t1_t3_view
where  col1 in (1, 2, 3, 4, 5, 6, 7, 8, 9, 10);

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------
SQL_ID  ca9pw03r3tua1, child number 0
-------------------------------------
select * from   t1_t3_view where  col1 in (1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
Plan hash value: 1448310819             
---------------------------------------------------------------------------------------------
| Id  | Operation                      | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |            |       |       |    10 (100)|          |
|   1 |  VIEW                          | T1_T3_VIEW |    10 | 20410 |    10   (0)| 00:00:01 |
|*  2 |   HASH JOIN OUTER              |            |    10 |   180 |    10   (0)| 00:00:01 |
|   3 |    INLIST ITERATOR             |            |       |       |            |          |
|   4 |     TABLE ACCESS BY INDEX ROWID| T1         |    10 |   120 |     7   (0)| 00:00:01 |
|*  5 |      INDEX UNIQUE SCAN         | T1_PK      |    10 |       |     6   (0)| 00:00:01 |
|   6 |    TABLE ACCESS FULL           | T3         |    19 |   114 |     3   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------
   2 - access("T1"."COL3"="COL1")
   5 - access(("T1"."COL1"=1 OR "T1"."COL1"=2 OR "T1"."COL1"=3 OR "T1"."COL1"=4 OR
              "T1"."COL1"=5 OR "T1"."COL1"=6 OR "T1"."COL1"=7 OR "T1"."COL1"=8 OR "T1"."COL1"=9 OR
              "T1"."COL1"=10)


whereas the following query does not (note the FTS on T1):

select *
from   t1_t3_view
where  col1 in (select col2 from t2
                where col1 <= 10);

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------
SQL_ID  95rxxkfy26rg3, child number 0
-------------------------------------
select * from   t1_t3_view where  col1 in (select col2 from t2
         where col1 <= 10)
Plan hash value: 637444478
-------------------------------------------------------------------------------------------
| Id  | Operation                    | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |            |       |       |    15 (100)|          |
|*  1 |  HASH JOIN RIGHT SEMI        |            |    10 | 20500 |    15   (7)| 00:00:01 |
|   2 |   TABLE ACCESS BY INDEX ROWID| T2         |    10 |    90 |     3   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN          | T2_PK      |    10 |       |     2   (0)| 00:00:01 |
|   4 |   VIEW                       | T1_T3_VIEW | 10000 |    19M|    12   (9)| 00:00:01 |
|*  5 |    HASH JOIN RIGHT OUTER     |            | 10000 |   175K|    12   (9)| 00:00:01 |
|   6 |     TABLE ACCESS FULL        | T3         |    19 |   114 |     3   (0)| 00:00:01 |
|   7 |     TABLE ACCESS FULL        | T1         | 10000 |   117K|     8   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------
   1 - access("COL1"="COL2")
   3 - access("COL1"<=10)
   5 - access("T1"."COL3"="COL1")


After much scratching of my head (and no small amount of swearing!), I eventually worked out how to do a 10053 trace on the query. The following is what Doug picked up on (and, to be fair, so had I - only I hadn't got round to doing anything about it aside from making a mental note of it):

**************************
Predicate Move-Around (PM)
**************************
PM: Considering predicate move-around in SEL$1 (#0).
PM:   Checking validity of predicate move-around in SEL$1 (#0).
CBQT: copy not possible because linked to with clause
 in SEL$3 (#0)
CBQT: copy not possible because view
 in SEL$1 (#0)
CBQT bypassed for SEL$1 (#0): Cannot copy query block.
CBQT: Validity checks failed for 95rxxkfy26rg3.


If I change the view so that the subquery is an in-line view, bingo!:

create view t1_t3_view2
as
select t1.col1,
       t1.col2,
       t1.col3,
       t3a.col2 t3_col2
from   t1,
       (select col1,
               decode(col2, 4, 90, 8, 45, col2) col2
        from   t3) t3a
where  t1.col3 = t3a.col1 (+);

PLAN_TABLE_OUTPUT                             
----------------------------------------------------------------------------------------------
SQL_ID  9jj0atfw7050d, child number 0
-------------------------------------
select * from   t1_t3_view2 where  col1 in (select col2 from t2
where col1 <= 10)
Plan hash value: 2918082440
----------------------------------------------------------------------------------------
| Id  | Operation                      | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |       |       |       |    13 (100)|          |
|*  1 |  HASH JOIN OUTER               |       |    10 |   270 |    13  (16)| 00:00:01 |
|   2 |   NESTED LOOPS                 |       |    10 |   210 |     9  (12)| 00:00:01 |
|   3 |    SORT UNIQUE                 |       |    10 |    90 |     3   (0)| 00:00:01 |
|   4 |     TABLE ACCESS BY INDEX ROWID| T2    |    10 |    90 |     3   (0)| 00:00:01 |
|*  5 |      INDEX RANGE SCAN          | T2_PK |    10 |       |     2   (0)| 00:00:01 |
|   6 |    TABLE ACCESS BY INDEX ROWID | T1    |     1 |    12 |     1   (0)| 00:00:01 |
|*  7 |     INDEX UNIQUE SCAN          | T1_PK |     1 |       |     0   (0)|          |
|   8 |   TABLE ACCESS FULL            | T3    |    19 |   114 |     3   (0)| 00:00:01 |
----------------------------------------------------------------------------------------
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T1"."COL3"="COL1")
   5 - access("COL1"<=10)
   7 - access("T1"."COL1"="COL2")


So that's one place where a WITH clause may cause problems and, much as I love it, I will now steer clear of WITH clauses in views until I know the issue has been fixed! (Not that I work with views that often; I'm only doing so here because we're on 10g, there's a join I need to add into some dynamic SQL, and EXECUTE IMMEDIATE has a 32767 character limit. I can't wait for 11g, where EXECUTE IMMEDIATE works with CLOBs!)
 
 
 
boneist_oracle
15 April 2011 @ 07:23 pm
This blog has moved - you can find this entry copied over here

Judging from the number of forum posts around the subject of dates, you would think that dates in Oracle are difficult, confusing and just plain awkward to use.

However, far from being difficult, I think dates in Oracle are really, really easy.

The DATE datatype in Oracle holds date and time (to the nearest second) information and stores it in its own internal format. Everyone knows that date+time consists of year, month, day, hours, minutes and seconds, so Oracle gives you a method of allowing you to specify which bits are the year, which are the month, etc. What could be simpler than that?

It's the TO_DATE() function that allows you to pass in a string and have Oracle read it as a date. Similarly, it's the TO_CHAR() function that allows you to take an Oracle DATE value and convert it back into a string that you and I can understand.

Let's look at an example:

to_date('01/01/2011', 'dd/mm/yyyy')
to_char(sysdate, 'dd Month yyyy hh24:mi')


Granted, it's a wee bit of typing, but it's not difficult. The list of available format masks can be found here. It's worth noting that you can also use these format masks with TRUNC and ROUND, for example if you need to find the quarter start date of a given date.
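To give a feel for what TRUNC(some_date, 'Q') does, here's a minimal Python sketch of the same idea (the function name quarter_start is mine, for illustration only - in Oracle you'd just use TRUNC):

```python
from datetime import date

def quarter_start(d: date) -> date:
    """Rough analogue of Oracle's TRUNC(d, 'Q'): first day of d's quarter."""
    first_month = 3 * ((d.month - 1) // 3) + 1
    return date(d.year, first_month, 1)

print(quarter_start(date(2011, 4, 15)))  # 2011-04-01
```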

People abuse dates in Oracle in many ways, such as only using 2-digit years (why, I'm not sure. It's as if Y2K never happened!) or, worse, converting dates to strings before comparing them.

This is a bad idea in many ways - for one, if you don't organise your date-as-a-string correctly, comparisons are meaningless - eg. '2 Feb 1999' is greater than '1 Jan 2011', because '1' < '2' in the string comparison. For another, when you convert the DATE into a string (or number), you remove vital information from the optimizer. This can lead to incorrect cardinality estimates which could throw your execution path off.

Why? Well, how many days are there between 1st January 2011 and 31st December 2010? Of course, the answer is 1. But if you were converting those to numbers, the question becomes "What is the difference between 20110101 and 20101231?", to which the answer is, of course, 8870. 1 vs 8870 - that's a fair amount of difference! By converting dates to strings or numbers, you leave the optimizer no option but to do the equivalent of sticking its finger in the air to guess how many rows it's going to retrieve!
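Both traps can be demonstrated outside Oracle, too; here's a minimal Python sketch of the same comparisons:

```python
from datetime import date

# As strings, comparison is character-by-character and meaningless for dates:
print('2 Feb 1999' > '1 Jan 2011')   # True, because '2' > '1'

# As real dates, the comparison behaves as you'd expect:
d1, d2 = date(1999, 2, 2), date(2011, 1, 1)
print(d1 > d2)                       # False

# And the dates-as-numbers trap: 1 day apart vs a "difference" of 8870
print(20110101 - 20101231)                           # 8870
print((date(2011, 1, 1) - date(2010, 12, 31)).days)  # 1
```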

Don't be afraid of Oracle's DATE format, it's simple to use and can help you do some very powerful things without much work from you - that's got to be a bonus! (Grouping results into quarters? No problem! Pivoting by month? Not a problem either! etc etc)
 
 
 
boneist_oracle
This blog has moved - you can find this entry copied over here

Tabibitosan is a technique that I picked up from, guess where? Yup, the OTN Forums! It was Aketi Jyuuzou who first introduced me to the idea, and it's a very useful technique for a certain set of problems. Aketi's forum post has lots of information on it, but I just wanted to demonstrate a simple example here, to give you a taste of what you can use it for.

So, suppose we have a set of data, which is unique for id and dt:

        ID DT
---------- ----------
         1 01/01/2011
         1 02/01/2011
         1 03/01/2011
         1 04/01/2011
         1 05/01/2011
         1 10/01/2011
         1 11/01/2011
         1 12/01/2011
         1 14/01/2011
         1 15/01/2011
         1 16/01/2011
         2 01/01/2011
         2 02/01/2011
         2 03/01/2011
         2 04/01/2011
         2 06/01/2011
         2 07/01/2011
         2 11/01/2011
         2 12/01/2011
         2 13/01/2011
         2 28/01/2011
         2 30/01/2011
         2 31/01/2011
         2 01/02/2011


and you want to group the results to find the start and end dates of consecutive rows, eg:

        ID MIN_DT     MAX_DT    
---------- ---------- ----------
         1 01/01/2011 05/01/2011
         1 10/01/2011 12/01/2011
         1 14/01/2011 16/01/2011
         2 01/01/2011 04/01/2011
         2 06/01/2011 07/01/2011
         2 11/01/2011 13/01/2011
         2 28/01/2011 28/01/2011
         2 30/01/2011 01/02/2011


Where do we start? Well, tabibitosan works by assigning two numbers to each row of the result set - either over the whole set or over partitioned result sets (in our case, per id): one number derived from the data itself, and a second, consecutive row number.

Because we're using dates, we'll label each row by converting the dt to a number in Julian format. We'll also use the row_number() analytic function to label each row with a consecutive number within each id, in ascending dt order. Finally, we'll subtract one from the other:

with sample_data as (select 1 id, to_date('01/01/2011', 'dd/mm/yyyy') dt from dual union all
                     select 1 id, to_date('02/01/2011', 'dd/mm/yyyy') dt from dual union all
                     select 1 id, to_date('03/01/2011', 'dd/mm/yyyy') dt from dual union all
                     select 1 id, to_date('04/01/2011', 'dd/mm/yyyy') dt from dual union all
                     select 1 id, to_date('05/01/2011', 'dd/mm/yyyy') dt from dual union all
                     select 1 id, to_date('10/01/2011', 'dd/mm/yyyy') dt from dual union all
                     select 1 id, to_date('11/01/2011', 'dd/mm/yyyy') dt from dual union all
                     select 1 id, to_date('12/01/2011', 'dd/mm/yyyy') dt from dual union all
                     select 1 id, to_date('14/01/2011', 'dd/mm/yyyy') dt from dual union all
                     select 1 id, to_date('15/01/2011', 'dd/mm/yyyy') dt from dual union all
                     select 1 id, to_date('16/01/2011', 'dd/mm/yyyy') dt from dual union all
                     select 2 id, to_date('01/01/2011', 'dd/mm/yyyy') dt from dual union all
                     select 2 id, to_date('02/01/2011', 'dd/mm/yyyy') dt from dual union all
                     select 2 id, to_date('03/01/2011', 'dd/mm/yyyy') dt from dual union all
                     select 2 id, to_date('04/01/2011', 'dd/mm/yyyy') dt from dual union all
                     select 2 id, to_date('06/01/2011', 'dd/mm/yyyy') dt from dual union all
                     select 2 id, to_date('07/01/2011', 'dd/mm/yyyy') dt from dual union all
                     select 2 id, to_date('11/01/2011', 'dd/mm/yyyy') dt from dual union all
                     select 2 id, to_date('12/01/2011', 'dd/mm/yyyy') dt from dual union all
                     select 2 id, to_date('13/01/2011', 'dd/mm/yyyy') dt from dual union all
                     select 2 id, to_date('28/01/2011', 'dd/mm/yyyy') dt from dual union all
                     select 2 id, to_date('30/01/2011', 'dd/mm/yyyy') dt from dual union all
                     select 2 id, to_date('31/01/2011', 'dd/mm/yyyy') dt from dual union all
                     select 2 id, to_date('01/02/2011', 'dd/mm/yyyy') dt from dual)
select id,
       dt,
       to_number(to_char(dt, 'j')) main_rn,
       row_number() over (partition by id order by dt) partitioned_rn,
       to_number(to_char(dt, 'j'))
         - row_number() over (partition by id order by dt) grp
from   sample_data;

        ID TO_CHAR(DT,'DD/MM/YYYY')    MAIN_RN PARTITIONED_RN        GRP
---------- ------------------------ ---------- -------------- ----------
         1 01/01/2011                  2455563              1    2455562
         1 02/01/2011                  2455564              2    2455562
         1 03/01/2011                  2455565              3    2455562
         1 04/01/2011                  2455566              4    2455562
         1 05/01/2011                  2455567              5    2455562
         1 10/01/2011                  2455572              6    2455566
         1 11/01/2011                  2455573              7    2455566
         1 12/01/2011                  2455574              8    2455566
         1 14/01/2011                  2455576              9    2455567
         1 15/01/2011                  2455577             10    2455567
         1 16/01/2011                  2455578             11    2455567
         2 01/01/2011                  2455563              1    2455562
         2 02/01/2011                  2455564              2    2455562
         2 03/01/2011                  2455565              3    2455562
         2 04/01/2011                  2455566              4    2455562
         2 06/01/2011                  2455568              5    2455563
         2 07/01/2011                  2455569              6    2455563
         2 11/01/2011                  2455573              7    2455566
         2 12/01/2011                  2455574              8    2455566
         2 13/01/2011                  2455575              9    2455566
         2 28/01/2011                  2455590             10    2455580
         2 30/01/2011                  2455592             11    2455581
         2 31/01/2011                  2455593             12    2455581
         2 01/02/2011                  2455594             13    2455581


You can see that we now have the same number (the grp column) for rows with consecutive dts, and each group has a different grp value within each id. This is because as each dt increases by 1, so does the row number, so the difference between the two stays the same from one row to the next. As soon as the dt jumps by more than 1, the difference grows, since the row number only ever increases by 1.
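The same arithmetic can be sketched outside the database; here's a minimal Python analogue (not part of the original technique - it uses toordinal() in place of the Julian day format, a single id, and itertools.groupby in place of GROUP BY):

```python
from datetime import date
from itertools import groupby

# The id 1 rows from the sample data above, already sorted by dt.
rows = [(1, date(2011, 1, d)) for d in (1, 2, 3, 4, 5, 10, 11, 12, 14, 15, 16)]

def grp_key(indexed_row):
    """Tabibitosan: absolute day number minus row number is constant
    across each run of consecutive dates."""
    rn, (_, dt) = indexed_row
    return dt.toordinal() - rn

groups = []
for _, chunk in groupby(enumerate(rows), key=grp_key):
    dts = [dt for _, (_, dt) in chunk]
    groups.append((min(dts), max(dts)))

# Three runs: 1st-5th, 10th-12th and 14th-16th of January
print(groups)
```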

Having identified the numbers that separate the rows into consecutive chunks, we can group the result set using them, to find the min and max dt's for each set:

with sample_data as (select 1 id, to_date('01/01/2011', 'dd/mm/yyyy') dt from dual union all
                     select 1 id, to_date('02/01/2011', 'dd/mm/yyyy') dt from dual union all
                     select 1 id, to_date('03/01/2011', 'dd/mm/yyyy') dt from dual union all
                     select 1 id, to_date('04/01/2011', 'dd/mm/yyyy') dt from dual union all
                     select 1 id, to_date('05/01/2011', 'dd/mm/yyyy') dt from dual union all
                     select 1 id, to_date('10/01/2011', 'dd/mm/yyyy') dt from dual union all
                     select 1 id, to_date('11/01/2011', 'dd/mm/yyyy') dt from dual union all
                     select 1 id, to_date('12/01/2011', 'dd/mm/yyyy') dt from dual union all
                     select 1 id, to_date('14/01/2011', 'dd/mm/yyyy') dt from dual union all
                     select 1 id, to_date('15/01/2011', 'dd/mm/yyyy') dt from dual union all
                     select 1 id, to_date('16/01/2011', 'dd/mm/yyyy') dt from dual union all
                     select 2 id, to_date('01/01/2011', 'dd/mm/yyyy') dt from dual union all
                     select 2 id, to_date('02/01/2011', 'dd/mm/yyyy') dt from dual union all
                     select 2 id, to_date('03/01/2011', 'dd/mm/yyyy') dt from dual union all
                     select 2 id, to_date('04/01/2011', 'dd/mm/yyyy') dt from dual union all
                     select 2 id, to_date('06/01/2011', 'dd/mm/yyyy') dt from dual union all
                     select 2 id, to_date('07/01/2011', 'dd/mm/yyyy') dt from dual union all
                     select 2 id, to_date('11/01/2011', 'dd/mm/yyyy') dt from dual union all
                     select 2 id, to_date('12/01/2011', 'dd/mm/yyyy') dt from dual union all
                     select 2 id, to_date('13/01/2011', 'dd/mm/yyyy') dt from dual union all
                     select 2 id, to_date('28/01/2011', 'dd/mm/yyyy') dt from dual union all
                     select 2 id, to_date('30/01/2011', 'dd/mm/yyyy') dt from dual union all
                     select 2 id, to_date('31/01/2011', 'dd/mm/yyyy') dt from dual union all
                     select 2 id, to_date('01/02/2011', 'dd/mm/yyyy') dt from dual),
     tabibitosan as (select id,
                            dt,
                             to_number(to_char(dt, 'j'))
                              - row_number() over (partition by id order by dt) grp
                     from   sample_data)
select id,
       min(dt) min_dt,
       max(dt) max_dt
from   tabibitosan
group by id, grp
order by id, grp;

        ID MIN_DT     MAX_DT    
---------- ---------- ----------
         1 01/01/2011 05/01/2011
         1 10/01/2011 12/01/2011
         1 14/01/2011 16/01/2011
         2 01/01/2011 04/01/2011
         2 06/01/2011 07/01/2011
         2 11/01/2011 13/01/2011
         2 28/01/2011 28/01/2011
         2 30/01/2011 01/02/2011


Neat, huh?

Whenever I see this kind of "grouping consecutive rows together" problem, the tabibitosan method immediately leaps to mind. Thanks Aketi!

Also, note how I've used both analytic functions and subquery factoring in this - both powerful tools in their own right, you can use them as building blocks to come up with queries that may look fairly complicated, but are easy to break down to see what each constituent part is doing. Pack a few tools in your SQL toolkit and you can easily combine them to build complex queries.
 
 
 
boneist_oracle
13 April 2011 @ 01:11 pm
This blog has moved - you can find this entry copied over here

... is it just me, or have any other OTN Forum regulars switched to "!=" directly as a result of Jive's inability to display "<>"?
 
 
 
boneist_oracle
This blog has moved - you can find this entry copied over here

It's been a while since I wrote in my blog, sorry! (My hobby of cross-stitching took over when I was stitching a sampler for the birth of my first nephew and hasn't really stopped! I know, excuses excuses...)

Anyway, I went along to the UKOUG TEBS Conference 2010 and I really enjoyed it. Here is a brief summary of what I remember (which in itself will be a good test of a) how well I listened and b) the quality of the presentations!)

Day 0

I turned up in Birmingham on Sunday evening, which was later than I had planned, due to having to work on the Sunday (big migration project; I think I’ll blog more on this later!).

I eventually managed to finish working just by the deadline I set myself, and hopped on the train. Thankfully there was no snow where I live, so all the trains were still running! This was sadly not the case for lots of other attendees, who had to battle severe transport issues in order to make the journey.

Once I’d sorted myself out at the hotel, Lisa Dobson and Niall Litchfield came and collected me and took me to a pub somewhere in the Food/drink court area just over the canal from the ICC. I’d never properly met either Lisa or Niall before, but Twitter has a lot to answer for *{;-)

I was fairly shy and didn’t mingle at all, but fortunately I knew several people who came over to chat, so that was nice! What I like most about these sorts of events is that people are really friendly and happy to chat to anybody, even though they might not know them. Very inclusive, even though I was sort of gatecrashing! *{;-)

Day 1

I started off by attending Tom Kyte’s "What’s New in Oracle Database Application Development" session, which was interesting as always. Judging from the lack of notes I took, I must have known about most of the things he mentioned – thankfully, he’s an engaging speaker, so this was certainly not a problem!

He even seemed to have taken on board one of my complaints from last year – there was a lot less glaring red in his presentations (lots of bright red areas in last year’s slides did unpleasant things to my eyes!). Since I didn’t complain directly to him, I can only assume that I wasn’t the only one who had a problem, or he decided on a style change (probably the latter!).

Next came Marco Gralike’s "XML Indexing Strategies – Choosing the right index for the right job" session. I was interested in this one because we use XMLTYPE columns in one of the databases I’ve taken over the maintenance of in the past year, and I’m not too au fait with XML!

I didn’t really understand that much of the presentation, as a lot was geared towards 11g XMLTYPE options, plus my general ignorance of XML, but I did scribble down a lot of notes, and at least I’m aware that there are various options out there when we finally move to 11g (we only just moved from 9i to 10g on some of our databases!).

I have a feeling that this is the kind of thing that will slot into place the more I learn of the subject, and I may well have "Ahhhh, so *that’s* what Marco meant!" moments in time to come!

I had a choice to make with the next session – Graham Wood’s "Oracle 'Hidden' Features: Features You Probably Didn't Know About and are FREE" or Jonathan Lewis’s "The beginners' guide to becoming an expert". Tough decision, but eventually I went with Jonathan.

Again, I didn’t take many notes, which, if I recall correctly, was mainly because Jonathan had SQL heavy slides and was talking about various ways of improving performance.

At the end of Jonathan’s session (at least, I think it was this session!), Andrew Clarke (aka APC) somehow recognised me and introduced himself to me (he was behind me – I recall Rob van Wijk doing the same thing to me the previous year, coincidentally in the same hall! I’m not sure what that says about me…! *{;-) ). Andrew is someone I know from the OTN Forums, so it was a pleasure to be able to put a name to the face (although sadly, I’m not sure I’d recognise him if I met him again, stupid memory! *slaps side of head*).

After lunch, I went to see Alex Nuijten’s "Analytic Functions Revisited", which I thought was an excellent presentation, especially if you weren’t already aware of them. Seeing as Alex is another OTN Forum regular and I was feeling brave, I decided to hang around afterwards to introduce myself. I was glad I did; he’s a thoroughly decent chap *{:-D

My last session of the day was Cary Millsap’s "Oracle Extended SQL Trace Data for Developers and Performance Analysts". I managed to make copious notes for this one, which I found interesting and informative. I meant to go back to work and immediately make use of some of the tools that were mentioned, but, um, I still haven’t got round to doing that yet!

After hanging around for a while, I decided that I had to go back to my hotel as I wasn't feeling well (bah, why must this be part of my conference experiences?!), so I sadly missed the UKOUG Pantomime and missed seeing Doug Burns' Movember shave-off. I also had to log into work and catch up / fix some bugs.

Day 2

I woke up early on the Tuesday, and thankfully felt much better. I headed down to see Martin Widlake’s presentation, which had a start time of 8.45am. Sadly, Martin was late, and consequently wasn’t at his best. The content itself was excellent, so it was a shame he wasn’t able to do it justice.

After Martin’s session (and a bit of commiseration), I went to see Tom Kyte’s "What do I really need to know when upgrading?" session. Seeing as I was right in the middle of a project to migrate from 9i Linux to 10g Solaris, I thought this would be useful. Unfortunately, Tom was discussing 11g, so it wasn’t ultimately useful for my project, but I did glean a lot of information that will be useful when we do eventually move to 11g.

After lunch, I headed to Jonathan Lewis’s "Co-operating with the database" presentation, where I was a little early. Whilst I was waiting around, I got a tap on the shoulder, which turned out to be the wife of an ex-colleague of mine. How on earth she recognised me, I have *no* idea! I couldn’t have done that if you paid me, but apparently she’d seen my photo on my facebook page, as her husband is a friend of mine there. Mucho bizarro!

Jonathan’s session was interesting and covered bind variables, histograms, partitioned tables and adaptive cursor sharing, with plenty of examples. Most of this I already knew, but he has a knack of explaining things very clearly.

After this, I headed off to see Robyn Sands’ presentation on Instrumentation. This was very clearly explained, and had information about some tools which, again, I meant to research but...

It was at the end of this session that Niall Litchfield spotted me – I was sat on my own a couple of rows back from the front of the stage so that I could see the slides properly – and he convinced me to join him and others at the front of the stage. I felt like I was surrounded by royalty, as I was now sat with what I would term "famous" (at least in the Oracle community) people. I know Niall will probably pshaw this, but that’s how I felt (and would still feel!).

The next session was Graham Wood’s "How to build an unscalable system", which took us through some of the common problems that cause unscalable systems. He was very clear and easy to listen to.

Next was Randolf Geist’s live troubleshooting session (part 1), which took us through some problems and what you would do to investigate/fix them. Sadly, I found this to be quite a "dry" presentation, and by the end of it, my brain was somewhat fried - conference overload!

Afterwards, Niall, Doug Burns, Martin Widlake and I headed down into Birmingham to a pub (quelle surprise, huh?!) where I think we bumped into someone whose name I can’t recall (Rob?). An entertaining evening was had, but sadly I had to leave early so that I could log into work (boo!!).

Day 3

My first session of the day was Mark McGill’s "Common Causes of SQL Performance Regression and their Remedies". Once again, this was something that I already knew quite a bit about, but I found it useful as it consolidated my knowledge and gave me a list of things to think about when I next come across a problem SQL statement.

After that, I went to a very entertaining debate between Jonathan Lewis and Tony Hasler on "Does Oracle Ignore Hints?". Jonathan was arguing that no, Oracle doesn’t ignore hints and Tony was arguing the opposite.

I have to say that I was firmly on the side of Jonathan throughout the entire debate, but Tony did come up with some interesting situations for Jonathan to debunk. I thoroughly enjoyed the light-hearted banter that took place throughout!

At lunchtime, I met up (finally!) with Ben "Munky" Burrell, another OTN forum regular. At some point during the proceedings, Alex N and Rob vW also came and joined us – I definitely think that we could set up our own "OTN SQL & PL/SQL forum" table at the next UKOUG TEBS conference and take on SQL and PL/SQL questions that people might have!

My final session of the day was Jason Arneil’s "Taking .uk all the way to 11: A migration war story". I think this was one of my favourite sessions of the conference – he took us through the steps taken to upgrade Nominet’s databases to 11g (testing, testing and yet more testing!), and told it in an entertaining style. I thought this was the perfect wind-down to the conference.


To summarise my experience of the 2010 UKOUG TEBS Conference:



  • I’d sorted out my glasses beforehand – I was better able to see the slides without straining my eyes this time!

  • I knew more people this time, which helped me feel more confident and enabled me to introduce myself to yet more people.

  • I made sure I didn’t overload myself with presentations – saw quite a few the first day, but fewer on the other two days.

  • I came prepared to learn a *lot*!

  • I discovered that I’m really recognisable to people who’ve never met me before.

  • I had fun!
 
 
 
boneist_oracle
22 April 2010 @ 10:17 am
This blog has moved - you can find this entry copied over here

@rnm1978 asked an interesting question over Twitter, along the lines of:

"My column has "YYYY / MM" in it - how can I amend the column so that the date is in "PMM-YY" format? eg. 'Here is some text 2010 / 04 and more text' becomes 'Here is some text P04-10 and more text'."

REGEXP_REPLACE to the rescue!

SQL> with mt as (select 'Thisismy string 2010 / 04 and there''s more over here' col1 from dual union all
  2              select 'Just plain vanilla string' col1 from dual union all
  3              select 'Have some 2009 / 02 more text here' col1 from dual)
  4  select col1, regexp_replace(col1, '([[:digit:]]{2})([[:digit:]]{2}) / ([[:digit:]]{2})',
  5                                    'P\3-\2') col1a
  6  from   mt;

COL1                                                    COL1A
------------------------------------------------------- --------------------------------------------------
Thisismy string 2010 / 04 and there's more over here    Thisismy string P04-10 and there's more over here
Just plain vanilla string                               Just plain vanilla string
Have some 2009 / 02 more text here                      Have some P02-09 more text here
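
If the aim were to fix the data in place rather than just query it, the same pattern would work in an UPDATE too. This is only a sketch – the table and column names (my_table, col1) are made up for illustration, and the WHERE clause simply restricts the update to rows that actually contain a date in the old "YYYY / MM" format:

```sql
-- Hypothetical in-place fix; my_table/col1 are made-up names.
-- \1\2 capture the year (century + 2-digit year), \3 the month;
-- the replacement keeps only \3 (month) and \2 (2-digit year).
UPDATE my_table
SET    col1 = regexp_replace(col1,
                             '([[:digit:]]{2})([[:digit:]]{2}) / ([[:digit:]]{2})',
                             'P\3-\2')
WHERE  regexp_like(col1, '[[:digit:]]{4} / [[:digit:]]{2}');
```

The WHERE clause isn’t strictly necessary (REGEXP_REPLACE leaves non-matching rows alone), but it saves pointlessly rewriting every row in the table.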
 
 
 
boneist_oracle
26 March 2010 @ 04:21 pm
This blog has moved - you can find this entry copied over here

Back when I first became an Oracle developer some 5 years ago (that long? Wow, and I still feel like a newbie at times!), I read AskTom daily. It was pretty much my only online source of information (apart from Google and the documentation, of course!).

When I moved to my current job 3 years ago, I discovered people were writing blogs about Oracle. I started reading those, as well as starting to help out on the OTN Forums. My reading of AskTom declined sharply over the first couple of weeks or so I was here (sorry Tom!). Don't get me wrong, it's still a fabulous site and I refer to it often, I just don't sit and wait for the nuggets of information to appear as obsessively as I used to.

Of course, over that time I have absorbed a lot of Tom's teaching, not only through his site and blog, but also because a lot of my fellow forum contributors and regular bloggers are advocates of Tom!

Today, Doug said to me (in the midst of some Friday afternoon bantering), "By the way, you might think I'm joking, but you *reek* follower-of-Tom". I know he didn't mean it as a compliment (probably something about lack of originality or summat!), but I took it as one anyway.

I have my own thoughts and opinions, and - gasp - I don't always agree with Tom but, contrary to some people's opinion, the reason I seem to be so pro-Tom is because my experience has shown him to be correct time and time again. Mind you, Tom isn't the only one to have helped mould my views on Oracle, SQL and PL/SQL (eg. Jonathan Lewis, Doug, Billy Verreynne and BluShadow to name but a few) but it could perhaps be argued that he has the most catch-phrases *{;-)