My highlights from DOAG 2015...

Last week I was in Nuremberg at the excellent annual German Oracle user group conference - DOAG15. This was my first visit to DOAG and I was amazed at the size of the event. It is huge, but most importantly it is very well organized and definitely one of the best, possibly the best, user conferences that I have attended.

…and then there was the chance to drink some Glühwein with the database PM team on the way to the speakers' dinner.



There were a lot of database product managers in attendance, so it was a great chance to catch up with my colleagues from HQ. This, combined with lots of Oracle ACE Directors, made for lots of great sessions. Quite surprisingly there was a great spread of English-language sessions and German sessions with translation. I even attempted a few German sessions that did not offer translation. Honestly, if you are based in Europe, speak either German or English, and are looking for a conference that offers deep technical content along with product road maps about where Oracle is heading, then this conference should be top of your list!

What were the highlights? Well, there were so many it is hard to pick the best ones, but here goes…

Obviously I have to start with my session: Using Analytical SQL to Intelligently Explore Big Data

This session is aimed at both developers and DBAs who are working on the next generation of data warehouse ecosystems. Working in these new big data based ecosystems can be very challenging. What if you could quickly and easily apply sophisticated analytical functions to any data, whether it's in Hadoop/NoSQL sources or an Oracle Database instance, using a simple declarative language you already know and love? Welcome back to SQL! This presentation will discuss the next generation of SQL with the latest release of the Oracle database and how you can take advantage of its analytical power for any kind of data source, inside or outside the database. Come and be amazed at how much SQL can really do to help you deliver richer big data analytics.
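To give a flavour of the session's point: the same analytic SQL works unchanged whether the table is an ordinary heap table or an external table over Hadoop accessed via Big Data SQL. The table and column names below are made up for illustration:

```sql
-- movie_logs could be a regular table or an external table over
-- Hadoop (via Big Data SQL) - the analytic SQL is identical either way.
SELECT custid,
       activity,
       time_stamp,
       COUNT(*) OVER (PARTITION BY custid)        AS events_per_customer,
       LAG(time_stamp) OVER (PARTITION BY custid
                             ORDER BY time_stamp) AS previous_event_time
FROM   movie_logs
ORDER  BY custid, time_stamp;
```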

For more information, including a link to the slides, see here:


Hermann Baer’s session on Oracle Partitioning - and everybody thought it no longer works

Oracle Partitioning is arguably one of the most frequently used database tools, especially for systems with a large amount of data in or beyond the multi-terabyte range. After nearly two decades of development (Oracle Partitioning first became available in Oracle 8.0 in 1997) it may appear logical to think that the development of Partitioning will at least slow down at some point. Think again. The features available in the newest version of the Oracle database include numerous new tools for all aspects of data management and data access.

Attend this lecture and learn first-hand about the latest news on Oracle Partitioning from Oracle Development.

This session covered some of the new features coming in the next release of Oracle Database 12c, so Hermann's slides are not available for download. In the meantime, make sure you know all about the new partitioning features that are in the current release of Database 12c; see our partitioning home page on OTN:
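As a quick taster of what is already in the current release, here is a sketch (table and partition names are illustrative) combining interval partitioning with the 12c online partition move:

```sql
-- Interval partitioning: Oracle creates new monthly partitions on demand
CREATE TABLE sales (
  sale_id   NUMBER,
  sale_date DATE,
  amount    NUMBER
)
PARTITION BY RANGE (sale_date)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
( PARTITION p_initial VALUES LESS THAN (DATE '2015-01-01') );

-- 12c: move (and compress) a partition online, without blocking DML
ALTER TABLE sales MOVE PARTITION p_initial COMPRESS ONLINE;
```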


Jean-Pierre Dijcks’s two sessions: 1) Big Data SQL New Features and Roadmap

The big data world moves fast. By the time of the DOAG conference, Big Data SQL will have gone through some major versions. The latest generation of Big Data SQL promises a major performance upgrade for Oracle SQL on Hadoop and NoSQL databases. We will go into the major new features, like Storage Indexes (and more secret stuff!), that have been added over the last six months, and explain their benefits, the technology underpinnings and the performance gains.

For more information, including a link to the slides, see here:

followed by 2) How to choose between Hadoop, NoSQL or Oracle Database

With today's hype it is almost impossible to quickly see what the right tool is for the job. Should you use Hadoop for all your data? How do you ensure you scale out a reader farm? When do you use Oracle Database? Do you use all of them, and if so - why?

This session clears up (most of) the confusion. We will discuss three simple criteria - performance, security and cost - to quickly get to the bottom of which technology you should use when. Note that this is not a product session trying to sell you a technology; the session goes to the heart of each technology, the core of its existence, to explain why some types of workloads cannot be solved with some technologies. Also note that this is not about saying bad things about technologies. Again, we go back to basics - the laws of physics - to explain when to use what. Why come to this session? To understand the basics of Big Data SQL and its use in big data architectures (including licensing), to see how the new features impact your ability to run more queries faster and how to leverage these in your big data environment, and to see the inner workings of some of the latest functionality from an under-the-hood perspective.

For more information, including a link to the slides, see here:

Jean-Pierre Dijcks


Bryn Llewellyn's two sessions: 1) Transforming one table to another: SQL or PL/SQL?

You've heard the mantra: "If you can do it in SQL, do so; only if you can't, do it in PL/SQL." Really? Always? What if you want to compute non-overlapping rowid ranges that completely span a table? An exact solution is trivial, but it's too slow for a table of typical size. Approximate solutions, using only SQL, are posted on the Internet. But they impose limitations, and leave generalization to the reader - and there's more to the generalization than meets the eye.

This session examines some alternative general solutions. Some use only SQL; some combine SQL and PL/SQL. All pass the same correctness tests; all have comparable performance. Choosing one therefore reflects some rather subtle, subjective considerations. I use the vehicle of making the choice to discuss issues of best practice, both SQL versus PL/SQL and within PL/SQL. You can guess what I'd prefer. Your choice might be different. And I will welcome debate.
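For readers who just need rowid chunking rather than the general discussion, since 11g Release 2 the database has shipped a supported API for exactly this - the DBMS_PARALLEL_EXECUTE package (the task and table names below are illustrative):

```sql
BEGIN
  DBMS_PARALLEL_EXECUTE.create_task(task_name => 'copy_task');
  -- Split BIG_TABLE into non-overlapping rowid ranges of ~10,000 rows
  DBMS_PARALLEL_EXECUTE.create_chunks_by_rowid(
    task_name   => 'copy_task',
    table_owner => USER,
    table_name  => 'BIG_TABLE',
    by_row      => TRUE,
    chunk_size  => 10000);
END;
/

-- The generated rowid ranges are visible in the dictionary
SELECT chunk_id, start_rowid, end_rowid
FROM   user_parallel_execute_chunks
WHERE  task_name = 'copy_task';
```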

For more information, including a link to the slides, see here:

Bryn Llewellyn

Bryn was very kind to point out that I still owe him a version of his conversion process based on the MATCH_RECOGNIZE clause. It’s on my todo list (quite near the top) but it is all a question of finding the time.

and 2) Why use PL/SQL? (followed by a meeting of the DOAG PL/SQL Community)

The success of any serious software system depends critically on adhering to the principles of modular decomposition - principles that are as old as computer science itself. A module encapsulates a coherent set of functionality, exposing it through an interface that exactly maps the functions it supports, and scrupulously hiding all implementation details. No one would dream of challenging these notions - unless, that is, one of the major modules is a relational database!

The structure of the tables, the rules that constrain their rows, and the SQL statements that read and change the rows are the implementation details. And Oracle Database provides PL/SQL subprograms to express the interface.
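A minimal sketch of the pattern Bryn advocates (table and package names are invented for illustration): clients call only the package; the table and its SQL stay hidden behind it:

```sql
CREATE OR REPLACE PACKAGE employees_api AS
  PROCEDURE hire(p_name IN VARCHAR2, p_salary IN NUMBER);
  FUNCTION  salary_of(p_name IN VARCHAR2) RETURN NUMBER;
END employees_api;
/
CREATE OR REPLACE PACKAGE BODY employees_api AS
  -- The EMPLOYEES table and these statements are implementation
  -- details; only the specification above is visible to clients.
  PROCEDURE hire(p_name IN VARCHAR2, p_salary IN NUMBER) IS
  BEGIN
    INSERT INTO employees(name, salary) VALUES (p_name, p_salary);
  END hire;

  FUNCTION salary_of(p_name IN VARCHAR2) RETURN NUMBER IS
    l_salary NUMBER;
  BEGIN
    SELECT salary INTO l_salary FROM employees WHERE name = p_name;
    RETURN l_salary;
  END salary_of;
END employees_api;
/
```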

Yet many customers avoid stored procedures. They claim that PL/SQL in the database can't be patched without unacceptable downtime; that the datatypes for passing row sets between the client-side code and the database PL/SQL are cumbersome to use and bring performance problems; and that it's impractical to give each developer a private sandbox within which to make changes to database PL/SQL. These notions are myths; they are no less defensible than the myth that technology whose name doesn't start with "J" or "No" is too old-fashioned to be useful.

Customers who do hide their SQL behind a PL/SQL interface are happy with the correctness, maintainability, security, and performance of their applications. Ironically, customers who follow the NoPlsql religion suffer in each of these areas. This session provides PL/SQL devotees with unassailable arguments to defend their beliefs against attacks from skeptics. I hope that skeptics who attend will experience an epiphany.

For more information, including a link to the slides, see here:

Andy Rivenes’s session on Best Practices With Oracle Database In-Memory

Oracle Database In-Memory is one of the most fascinating new database features in a long time, promising an incredible performance boost for analytics. This presentation provides a step-by-step guide on when and where you should take advantage of Database In-Memory, as well as outlining strategies to help you ensure you get the promised performance boost regardless of your database environment. These strategies include how to identify which objects to populate into memory, how to identify which indexes to drop, and details on how to integrate with other performance-enhancing features. With easy-to-follow real-world examples, this presentation explains exactly what steps you need to take to get up and running on Oracle Database In-Memory.
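Getting started is deliberately simple. Assuming the column store has been sized with the INMEMORY_SIZE parameter (available from 12.1.0.2 onwards; the table name here is illustrative), population is a single ALTER:

```sql
-- Mark the table for population into the In-Memory column store
ALTER TABLE sales INMEMORY PRIORITY HIGH;

-- Check what has actually been populated
SELECT segment_name, populate_status, inmemory_size
FROM   v$im_segments;
```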

For more information, including a link to the slides, see here:


Kim Berg Hansen’s session on Analytic Functions 101. Kim’s session provided my favorite quote of the week - “You just can’t live without Analytical SQL functions”.

SQL gives you rows of detail; aggregate functions give you summarized data. To meet your requirements you often need both at the same time. Or you need, within one row, to access data from or compare with one or more other rows. That can be messy with plain SQL, often requiring access to the same data multiple times.

Analytic functions can help your SQL do many such things efficiently - things that you otherwise might have been tempted to do in slow procedural loops. Properly applied, analytic functions can speed up the tasks for which they are appropriate manyfold.

In this session I'll cover the syntax as well as little quirks to remember, all demonstrated with short and clear examples.
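A small example of the kind of thing Kim demonstrates, using the classic SCOTT.EMP demo table: a running total and a percentage-of-department figure, both computed in a single pass over the data:

```sql
SELECT ename,
       deptno,
       sal,
       -- running total of salaries within each department
       SUM(sal) OVER (PARTITION BY deptno
                      ORDER BY sal, ename)              AS running_total,
       -- each row's share of its department's total salary
       ROUND(100 * sal / SUM(sal)
                         OVER (PARTITION BY deptno), 1) AS pct_of_dept
FROM   emp
ORDER  BY deptno, sal;
```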

For more information, including a link to the slides, see here:

Kim Berg Hansen




Alex Nuijten’s session on SQL Model Clause: A Gentle Introduction

The Model Clause adds spreadsheet-like functionality to the SQL language. Very powerful stuff, yet the syntax can be quite daunting. With the Model Clause, you define a multidimensional array from query results and then apply formulas to this array to calculate new values. This presentation will give you a gentle introduction in the complex syntax of the Model Clause and will show you some real life examples where the Model Clause was invaluable.
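As a gentler example than fractals, here is a MODEL query that generates the first ten Fibonacci numbers from two seed rows:

```sql
SELECT n, fib
FROM   (SELECT 1 AS n, 1 AS fib FROM dual
        UNION ALL
        SELECT 2, 1 FROM dual)
MODEL
  DIMENSION BY (n)
  MEASURES (fib)
  RULES ITERATE (8)
    -- each new cell is the sum of the two previous cells
    ( fib[ITERATION_NUMBER + 3] =
        fib[ITERATION_NUMBER + 2] + fib[ITERATION_NUMBER + 1] )
ORDER BY n;
```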

For more information, including a link to the slides, see here. Who would have thought that you could use the Model clause to create fractals? But then, with SQL, almost anything is possible:

Using Analytical Functions to create fractals



Andrej Pashchenko’s session on 12c SQL Pattern Matching - when will I use it?

The SQL Pattern Matching clause (MATCH_RECOGNIZE) introduced in 12c is often associated with Big Data, complex event processing, etc. Should SQL developers who have not (yet) faced such tasks ignore this new feature? Absolutely not! The SQL extension is powerful enough to solve many common SQL tasks in a new, simpler and more efficient way. The lecture will provide an insight into the new SQL syntax and present a few examples that will give developers a foretaste of what is to come.
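The canonical example from the documentation gives a feel for the syntax - finding V-shaped dips (price falls, then rises) in the standard TICKER demo table (columns symbol, tstamp, price):

```sql
SELECT *
FROM   ticker
MATCH_RECOGNIZE (
  PARTITION BY symbol
  ORDER BY tstamp
  MEASURES STRT.tstamp       AS start_tstamp,
           LAST(DOWN.tstamp) AS bottom_tstamp,
           LAST(UP.tstamp)   AS end_tstamp
  ONE ROW PER MATCH
  AFTER MATCH SKIP TO LAST UP
  PATTERN (STRT DOWN+ UP+)
  DEFINE
    DOWN AS DOWN.price < PREV(DOWN.price),
    UP   AS UP.price   > PREV(UP.price)
);
```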

For more information, including a link to the slides, see here. This session was in German, but the slides are a great introduction to pattern matching.

Andrej Pashchenko




Kellyn PotVin-Gorman’s session on The Power of the AWR Warehouse and Beyond:

Enterprise Manager 12c Release 4 introduces the new AWR Warehouse feature, which allows you to automatically extract detailed performance data from your database targets and consolidate it in a single location, the AWR Warehouse repository. The main source of the data stored in the AWR Warehouse is the Automatic Workload Repository (AWR), via AWR snapshots. This lets you retain a long-term history of AWR data from chosen EM12c database targets, enabling full historical analysis of AWR data across databases without impacting the source databases.

This session will cover the power behind this incredible new feature, a full review of the architecture and ETL process, and then move on to what everyone who's worked with AWR data really wants to know: what is the impact on the existing suite of AWR queries, and how much can it really provide the business and the DBA when searching for answers?

For more information, including a link to the slides, see here:


Chris Saxon’s session on Edition Based Redefinition for Zero Downtime PL/SQL Changes:

Releasing code changes to PL/SQL in heavily executed packages on databases with high uptime requirements brings many challenges. It's not possible to gain the DDL lock to compile a package while it's being executed, so how do you release changes without downtime?

In 11gR2, Oracle introduced edition-based redefinition (EBR). This allows you to deploy multiple versions of PL/SQL objects in the same database. Using EBR, you can release PL/SQL easily without downtime.

This talk discusses earlier methods of tackling this problem and shows how EBR is superior. It steps through the process of creating and using editions with a live demo. It also investigates some of the downsides to using EBR and techniques you can use to mitigate these.
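The core workflow is short. This sketch assumes the schema has already been editions-enabled (ALTER USER ... ENABLE EDITIONS) and uses made-up object names:

```sql
-- Create a new edition as a child of the default one
CREATE EDITION release_2 AS CHILD OF ora$base;
GRANT USE ON EDITION release_2 TO app_owner;

-- Switch this session to the new edition...
ALTER SESSION SET EDITION = release_2;

-- ...and install the new package version there. Sessions still on
-- ora$base continue to run the old version, untouched.
CREATE OR REPLACE PACKAGE my_pkg AS
  FUNCTION version_label RETURN VARCHAR2;
END my_pkg;
/
```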

For more information, including a link to the slides, see here:


Christo Kutrovsky’s session on Maximize Data Warehouse Performance with Parallel Queries

Oracle data warehouses are typically deployed on servers with a very large number of cores, and increasingly on RAC. Making efficient use of all available cores when processing data warehouse workloads is therefore critical to achieving maximal performance, and skilled use of parallel queries is key.

In this presentation we'll look at how to effectively use parallel queries and how to avoid common mistakes that often cripple data warehouse performance. You will learn all about how parallel queries work, which join methods to use with them, how to avoid expensive use of temp space and how to leverage all nodes efficiently by optimizing data flow. This is a practical session, and you will leave with several techniques you can immediately apply.
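The two most common ways to invoke parallel query - a statement-level hint or an object-level setting - look like this (table and column names are illustrative):

```sql
-- Statement level: request eight parallel execution servers for this query
SELECT /*+ PARALLEL(s, 8) */ s.prod_id, SUM(s.amount_sold)
FROM   sales s
GROUP  BY s.prod_id;

-- Object level: make degree 8 the default for the table
ALTER TABLE sales PARALLEL 8;
```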

For more information, including a link to the slides, see here:


And that was DOAG 2015. An incredible three days in Nuremberg. I met lots of wonderful people and am definitely looking forward to DOAG 2016. If you are only planning to attend one conference in Europe next year then I would definitely recommend DOAG 2016. It will be around the same time next year, towards the end of November, so start making plans. The DOAG website is here:

Thanks to everyone at DOAG for a great conference and see you next year.


