
Tuesday, 12 January 2016

BIWA 2016 - here's my list of must-attend sessions and labs

It’s almost here - the 2016 BIWA conference at the Oracle Conference Center. The conference starts on January 26 with a welcome by the conference leaders at 8:30am. The BIWA Summit is dedicated to providing the very latest information and best practices for data warehousing, big data, spatial analytics and BI. This year the conference has expanded to include the most important query language on the planet: SQL. There will be a whole track dedicated to YesSQL! The full agenda is available here.

Unfortunately I won’t be able to attend this year’s conference, but if I were going to be there, then this would be my list of must-attend sessions and hands-on labs.

 

Tuesday January 26

What’s New with Spatial and Graph? Technologies to Better Understand Complex Relationships
Time: 10:15 AM - 11:05 AM
Spatial and Graph analysis is about understanding relationships. As applications and infrastructure evolve, as new technologies and platforms emerge, we find new ways to incorporate and exploit social and location information into business and analytic workflows, creating a new class of applications. The emergence of Internet of Things, Cloud services, mobile tracking, social media, and real time systems creates a new landscape for analytic applications and operational systems. It changes the very nature of what we expect from devices and systems. New platforms and capabilities combine to become business as usual.

This session will discuss advances, trends, and directions in the technology, infrastructure, data, and applications affecting the use of spatial and graph analysis in current and emerging analytic and operational systems, and the new offerings Oracle has introduced to address these new developments. This includes new offerings for the Cloud, the NoSQL and Hadoop platform as well as a high-level overview of what is coming in Oracle Spatial and Graph 12.2.

Making SQL Great Again (SQL is Huuuuuuuuuuuuuuuge!)
Time: 11:20 AM - 12:10 PM
SQL is not giving an inch of ground in the fight against NoSQL. Oracle Corporation is fiercely battling on many fronts to reclaim lost ground. Andy Mendelsohn (Executive Vice President for Database Server Technologies), George Lumpkin (Vice President, Product Management), Bryn Llewellyn (Distinguished Product Manager), Steven Feuerstein (Developer Advocate), and Mohamed Zait (Architect) will explain Oracle’s strategy. Oracle ACE Director Kyle Hailey will moderate the discussion.

Is Oracle SQL the best language for Statistics?
Time: 11:20 AM - 12:10 PM
Did you know that Oracle Database comes with more than 280 statistical functions, and that these statistical functions are available in all versions of the database? Most people do not seem to know this. When we hear about people performing statistical analysis, they are usually talking about Excel and R. But what if we could do statistical analysis in the database, without having to extract any data onto client machines? That would be really useful - and just think of the possible data security issues of moving data into Excel, R and other tools.

This presentation will explore the various statistical areas available in SQL, and I will give a number of demonstrations of some of the more commonly used statistical techniques. With the introduction of Big Data SQL we can now use these same 280+ Oracle statistical functions on all our data, including Hadoop and NoSQL sources. We can also greatly expand our statistical capabilities with Oracle R Enterprise, using its capabilities embedded in SQL.
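
To give a flavour of what this looks like in practice, here is a minimal sketch (the SALES table and its columns are illustrative, not taken from the session) of a few of the built-in statistical functions:

    -- Basic descriptive statistics and a correlation, computed entirely inside the database
    SELECT AVG(amount_sold)                 AS mean_sale,
           MEDIAN(amount_sold)              AS median_sale,
           STDDEV(amount_sold)              AS stddev_sale,
           CORR(amount_sold, quantity_sold) AS corr_amount_qty
    FROM   sales;

    -- Hypothesis testing is built in too, e.g. an independent-samples t-test
    -- comparing sales amounts across two channels
    SELECT STATS_T_TEST_INDEP(channel_id, amount_sold) AS two_sided_p_value
    FROM   sales
    WHERE  channel_id IN (2, 3);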

Getting to grips with SQL Pattern Matching
Time: 01:20 PM - 02:10 PM
The growth in interest in big data has meant the need to quickly and efficiently analyze very large data sets is now a key part of many projects. This session will explore the new Database 12c SQL Pattern Matching feature. This is a very powerful feature, which can be used to solve a wide variety of business and technical problems. Using a live demo we will compare and contrast pre-12c and 12c solutions for typical pattern matching business problems and consider some technical problems that can be solved very efficiently using MATCH_RECOGNIZE.

To help you understand the basic mechanics of the MATCH_RECOGNIZE clause, the demo will look at the new MATCH_RECOGNIZE-related keywords in the explain plan, such as DETERMINISTIC, FINITE and AUTO. We will explore the key concepts of deterministic vs. non-deterministic state machines. Finally, we will review typical issues such as back-tracking.
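
If you have not seen MATCH_RECOGNIZE before, here is a minimal sketch of the classic "V-shape" example (the TICKER table and its columns are illustrative) showing the basic moving parts - PARTITION BY, ORDER BY, MEASURES, PATTERN and DEFINE:

    -- Find V-shaped patterns (price falls, then rises) per symbol
    SELECT *
    FROM   ticker
    MATCH_RECOGNIZE (
      PARTITION BY symbol
      ORDER BY tstamp
      MEASURES FIRST(tstamp)     AS start_date,
               LAST(down.tstamp) AS bottom_date,
               LAST(up.tstamp)   AS end_date
      ONE ROW PER MATCH
      PATTERN (strt down+ up+)
      DEFINE
        down AS down.price < PREV(down.price),
        up   AS up.price   > PREV(up.price)
    );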

How to choose between Hadoop, NoSQL or Oracle Database
Time: 01:20 PM - 02:10 PM
With today’s hype it is impossible to quickly see what the right tool is for the job. Should you use Hadoop for all your data? How do you ensure you scale out a reader farm? When do you use Oracle Database?

This session clears up (most of) the confusion. We will discuss simple criteria - performance, security and cost - to quickly get to the bottom of which technology you should use when. Note that this is not a product session trying to sell a technology; the session digs into the fundamentals of these technologies to explain why some types of workloads cannot be solved with some of them.

Why come to this session:

  • Get equipped to cut through the marketing and hype and understand the fundamentals of big data technologies
  • Ensure that you do not fall into the trap of wanting the next shiny thing without knowing exactly what you are getting
  • Derive an architecture to solve real big data business cases in your organization with the right technologies

 

Tuesday's Hands-on Labs

If you want to get some up-close and personal time with Oracle Advanced Analytics then make sure you attend Brendan Tierney and Charlie Berger's 2-hour hands-on lab. By the time you have finished the lab you will be a data mining expert!

Learn Predictive Analytics in 2 hours!! Oracle Data Miner 4.0 Hands on Lab
Time: 01:20 PM - 03:30 PM
Learn predictive analytics in 2 hours!! Multiple experts in Oracle Advanced Analytics/Oracle Data Mining will be on hand to help guide you through the basics of predictive analytics using Oracle Advanced Analytics 12c, a Database 12c Option and SQL Developer 4.1's Oracle Data Miner workflow GUI.

Oracle Advanced Analytics embeds powerful data mining algorithms in the SQL kernel of the Oracle Database for problems such as predicting customer behavior, anticipating churn, identifying up-sell and cross-sell opportunities, detecting anomalies and potential fraud, customer profiling, text mining and retail market basket analysis. Oracle Data Miner 4.1, an extension to SQL Developer 4.1, enables business analysts to quickly analyze and visualize their data, build, evaluate and apply predictive models, and develop predictive analytics methodologies - all while keeping the data inside Oracle Database.

Oracle Data Miner's easy-to-use "drag and drop" workflow user interface enables data analysts to quickly build and evaluate predictive analytics models, apply them to new data in Oracle Database and then, most importantly, immediately generate SQL scripts to accelerate enterprise deployment. Come see how easily you can discover big insights from your Oracle data, generate SQL scripts for deployment and automation and deploy results into Oracle Business Intelligence (OBIEE) and other BI dashboards and applications.

Time to relax: at this point you can now relax as day one is almost over. There is just enough time left to enjoy the welcome reception event, sponsored by Deloitte.



Wednesday January 27

The Next Generation of the Oracle Optimizer
Time: 10:05 AM - 10:55 AM
This session introduces the latest enhancements from Oracle to make the statistics in the Oracle Optimizer in the Oracle Database more comprehensive, so you can expect Oracle Optimizer to give you better execution plans than ever before. Learn how Oracle has made it easier to maintain and be assured of comprehensive, accurate, and up-to-date statistics. These improvements allow database applications to get the most out of the enhancements made to Oracle Optimizer. The session uses real-world examples to illustrate these new and enhanced features, helping DBAs and developers understand how to best leverage them in the future and how they can be put to practical use today.
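
Most of the statistics-management improvements surface through DBMS_STATS preferences, so as a point of reference (not the session's own material) here is a minimal sketch, using the illustrative SH.SALES table, of switching a partitioned table to incremental statistics and gathering stats with the defaults:

    BEGIN
      -- Maintain global statistics incrementally from partition-level stats
      DBMS_STATS.SET_TABLE_PREFS('SH', 'SALES', 'INCREMENTAL', 'TRUE');
      -- Let the database choose the sample size
      DBMS_STATS.SET_TABLE_PREFS('SH', 'SALES', 'ESTIMATE_PERCENT',
                                 'DBMS_STATS.AUTO_SAMPLE_SIZE');
      DBMS_STATS.GATHER_TABLE_STATS('SH', 'SALES');
    END;
    /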

Taking Full Advantage of the PL/SQL Compiler
Time: 01:00 PM - 01:50 PM
The Oracle PL/SQL compiler automatically optimizes your code to run faster; makes available compile-time warnings to help you improve the quality of your code; and offers conditional compilation, which allows you to tell the compiler which code to include in compilation and which to ignore. This session takes a look at what the optimizer does for/to your code, demonstrates compile-time warnings and offers recommendations on how to leverage this feature, and offers examples of conditional compilation so you know what is possible with that feature.
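
If you have not used these compiler features before, here is a minimal sketch of compile-time warnings and conditional compilation (the procedure itself is purely illustrative):

    -- Ask the compiler to report all warnings for objects compiled in this session
    ALTER SESSION SET PLSQL_WARNINGS = 'ENABLE:ALL';

    CREATE OR REPLACE PROCEDURE demo_proc IS
    BEGIN
      -- Conditional compilation: include different code depending on the database version
      $IF DBMS_DB_VERSION.VER_LE_11_2 $THEN
        DBMS_OUTPUT.PUT_LINE('Compiled on 11.2 or earlier');
      $ELSE
        DBMS_OUTPUT.PUT_LINE('Compiled on 12.1 or later');
      $END
    END demo_proc;
    /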

Graph Databases: A Social Network Analysis Use Case
Time: 02:20 PM - 03:10 PM
Graph databases offer a scalable and high performance platform to model, explore, analyze, and link data for a wide range of applications. The first half of this session will provide a broad introduction to graph databases and how they are used to drive social network analysis, IoT, and linked data applications. The various graph technologies available from Oracle for Hadoop, NoSQL, and Database 12c will also be introduced. The second half will demonstrate the use of the Oracle Big Data Spatial and Graph product to perform “sentiment analysis” and “influencer detection” across a social network. In addition, techniques for combining social analysis with location analysis will be demonstrated to better understand how members of a social network are geographically organized and influenced. Finally, session participants will learn some of the advantages and best practices for undertaking social network analysis on a Big Data graph platform.

Fiserv Case Study: Using Oracle Advanced Analytics for Fraud Detection in Online Payments
Time: 03:25 PM - 04:15 PM
Session level: Intermediate
Keywords: OAA/Oracle Data Mining, OAA/Oracle R Enterprise, Oracle Database, Oracle R Advanced Analytics for Hadoop
Fiserv manages risk for $30B+ in transfers, servicing 2,500+ US financial institutions, including 27 of the top 30 banks, and prevents $200M in fraud losses every year. When dealing with potential fraud, reaction needs to be fast. Fiserv describes their use of Oracle Advanced Analytics for fraud prevention in online payments and shares their best practices and results from turning predictive models into actionable intelligence and next-generation strategies for risk mitigation.

Keynote: Oracle Big Data: Strategy and Roadmap

Time: 05:10 PM - 06:00 PM
The main keynote at this year's conference will be presented by Neil Mendelson, Vice President Big Data Product Management, Oracle.



Implement storage tiering in Data warehouse with Oracle Automatic Data Optimization
Time: 04:30 PM - 05:00 PM
The rapid growth in the amount of data in data warehouses poses ever-increasing challenges for performance as well as cost. High performance storage such as flash SSDs can significantly improve data warehouse performance, but it is very costly to store a very large volume of data in SSDs. One effective way to address these challenges is to store the data in different tiers of storage, and also compress it, based on business and performance needs. For this purpose, Oracle 12c introduced two new features, Automatic Data Optimization (ADO) and Heat Map, to implement storage tiering and data compression based on data usage and predefined rules. This session shows how data warehouse DBAs can leverage these features to optimize database storage for both performance and cost. A few examples will be shown as real-life case studies.
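
To make this a little more concrete, here is a minimal sketch of the kind of ADO policies being described (the table and tablespace names are illustrative):

    -- Heat Map must be enabled so the database tracks how segments and rows are used
    ALTER SYSTEM SET HEAT_MAP = ON;

    -- Compress rows that have not been modified for 30 days
    ALTER TABLE sales ILM ADD POLICY
      ROW STORE COMPRESS ADVANCED ROW
      AFTER 30 DAYS OF NO MODIFICATION;

    -- Move cold segments to a cheaper tablespace when the current one comes under space pressure
    ALTER TABLE sales ILM ADD POLICY
      TIER TO low_cost_ts;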

Wednesday's Hands-on Labs

There is a whole series of must-attend hands-on labs today, so make sure you find time to learn about all the latest and greatest features in Oracle Advanced Analytics and Oracle Spatial:

Scaling R to New Heights with Oracle Database
Time: 09:00 AM - 10:55 AM
Oracle R Enterprise (ORE), a component of the Oracle Advanced Analytics Option, provides a comprehensive, database-centric environment for end-to-end analytical processes in R, with immediate deployment to production environments. Entire R scripts can be operationalized in production applications - eliminating the need to port R code. Using ORE with Oracle Database enables R users to transparently analyze and manipulate data in Oracle Database thereby eliminating memory constraints imposed by client R engines. In this hands-on lab, attendees will work with the latest Oracle R Enterprise software, getting experience with the ORE transparency layer, embedded R execution, and the statistics engine. Working with the instructors through scripted scenarios, users will understand how R is used in combination with Oracle Database to analyze large volume data.

Predictive Analytics using SQL and PL/SQL
Time: 11:10 AM - 01:50 PM
This hands-on lab will be a suitable follow-on for those people who have attended the 'Learn Predictive Analytics in 2 hours!! Oracle Data Miner 4.0 Hands on Lab'.

As you develop predictive modeling skills using Oracle Data Miner, at some stage you will want to explore the in-database data mining algorithms using the native SQL and PL/SQL commands. In this 2-hour hands-on lab you will build upon the Oracle Data Miner hands-on lab by exploring the existing Oracle Data Mining models, model settings and parameters. You will be walked through the SQL and PL/SQL code needed to set up and build an Oracle Data Mining model and evaluate the performance of the models. In the final section of this hands-on lab you will see how you can use these newly created models in batch and real-time modes, and how easy it is to build predictive analytics into your front-end applications and analytic dashboards.
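
To give a taster of what the native API looks like, here is a minimal sketch (table, column and model names are all hypothetical) of building a classification model with DBMS_DATA_MINING and then scoring new data with the SQL PREDICTION functions:

    -- Settings table: choose the algorithm (here, a decision tree)
    CREATE TABLE churn_settings (setting_name  VARCHAR2(30),
                                 setting_value VARCHAR2(4000));

    BEGIN
      INSERT INTO churn_settings VALUES
        (dbms_data_mining.algo_name, dbms_data_mining.algo_decision_tree);

      dbms_data_mining.create_model(
        model_name          => 'CHURN_MODEL',
        mining_function     => dbms_data_mining.classification,
        data_table_name     => 'CUSTOMERS_TRAIN',
        case_id_column_name => 'CUST_ID',
        target_column_name  => 'HAS_CHURNED',
        settings_table_name => 'CHURN_SETTINGS');
    END;
    /

    -- Score new records in real time with plain SQL
    SELECT cust_id,
           PREDICTION(churn_model USING *)             AS predicted_churn,
           PREDICTION_PROBABILITY(churn_model USING *) AS churn_probability
    FROM   customers_new;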

Applying Spatial Analysis To Big Data
Time: 02:20 PM - 04:15 PM
Location analysis and map visualization are powerful tools to apply to data sources like social media feeds and sensor data, to uncover relationships and valuable insights from big data. Oracle Big Data Spatial and Graph offers a set of analytic services and data models that support Big Data workloads on Apache Hadoop. A geo-enrichment service provides location identifiers to big data, enabling data harmonization based on location. A range of geographic and location analysis functions can be used for categorizing and filtering data. For traditional geo-processing workloads, the ability to perform large-scale operations for cleansing data and preparing imagery, sensor data, and raw input data is provided.

In this lab, you will learn how to use and apply these services to your big data workloads. We will show you how to

  1. Load common log data like Twitter feeds to HDFS, and perform spatial analysis on that data.
  2. Develop a RecordReader for this sample data so that spatial analysis functions can be applied to records in the data.
  3. Create and use a spatial index for such data.
  4. Use RecordReaders for additional data formats like Shape and JSON.
  5. Use the HTML5 mapping API to create interactive visualization applications on HDFS data.

Learn how developers and data scientists can manage their most challenging graph, spatial, and raster data processing in a single enterprise-class Big Data platform.

 

Thursday January 28



Best Practices for Getting Started With Oracle Database In-Memory
Time: 09:50 AM - 10:40 AM
Oracle Database In-Memory is one of the most fascinating new database features in a long time, promising an incredible performance boost for analytics. This presentation provides a step-by-step guide on when and where you should take advantage of Database In-Memory, as well as outlining strategies to help ensure you get the promised performance boost regardless of your database environment. These strategies include how to identify which objects to populate into memory, how to identify which indexes to drop, and details on how to integrate with other performance-enhancing features. With easy-to-follow real-world examples, this presentation explains exactly what steps you need to take to get up and running on Oracle Database In-Memory.

Large Scale Machine Learning with Big Data SQL, Hadoop and Spark
Time: 10:55 AM - 11:45 AM
Many data scientists and data analysts who are trying to work at scale with statistical analysis tools have to either learn advanced parallel programming or resort to different interfaces to work with data in an Oracle Database and a Big Data cluster.

With the release of the latest Oracle R Advanced Analytics for Hadoop, users have access to the scalability of Big Data into billions of records combined with the fast Machine Learning speeds available through Spark and the Oracle algorithms, without having to leave the intuitive R environment.

Also, with Big Data SQL, the Oracle Advanced Analytics algorithms that run in-Database are able to transparently reach out to data in a Big Data Cluster, exposing the very intuitive Data Miner GUI and the Oracle R Enterprise interfaces to a Data Scientist or Analyst connected to Oracle Database.

In this session, we are going to review two use cases and benchmarks: 1) Predicting Airline flight cancellations using Logistic Regression directly against a Big Data Cluster and 2) Root Cause Analysis of semiconductor manufacturing that uses custom-built R algorithms running against data in the Database and against Hadoop to verify the potential of Big Data SQL.

Analytical SQL in the Era of Big Data
Time: 01:30 PM - 02:20 PM
This session is aimed at developers and DBAs who are working on the next generation of data warehouse ecosystems. Working in these new big data-based ecosystems can be very challenging. What if you could quickly and easily apply sophisticated analytical functions to any data—whether it’s in Hadoop/NoSQL sources or an Oracle Database instance—using a simple, declarative language you already know and love? Welcome back to SQL! This presentation discusses the next generation of SQL with the latest release of Oracle Database and how you can take advantage of its analytical power for any kind of data source, inside or outside of the database. Join this session and be amazed at how much SQL can really do to help you deliver richer big data analytics.

Extreme Data Warehouse Performance with Oracle Exadata
Time: 02:30 PM - 03:20 PM
You've implemented Exadata and are seeing a 3X performance improvement, but you were hoping for more. You've heard the mind-blowing 10X, and sometimes even 100X, improvement stories. This session shows you how to get maximum performance for your data warehouse by having the Exadata "secret sauce" work for you. With a little understanding you can get that 10X+ improvement in your environment. This session will show how to take advantage of the key Exadata specific performance features like smart scans, smart flash cache, storage indexes, HCC and IORM - as well as how to make better use of existing Oracle features like parallelism, partitioning and indexing to make sure you are getting the most out of your Exadata investment.

Attendees of this session will learn why and how their data warehouse can rival the top performing data warehouses in the world using Oracle Exadata and why there is no platform to run an Oracle database faster than Exadata. The session dives into real-world implementation examples of tuning Data Warehouse environments on Exadata at several different customers. The session doesn't just show and explain the Exadata features, but explains how to implement them. It also shows how many other Oracle data warehouse performance features, which are not Exadata specific, should be utilized and implemented on Exadata.

Worst Practice in Data Warehouse Design
Time: 03:40 PM - 04:30 PM
After many years of designing data warehouses and consulting on data warehouse architectures, I have seen a lot of bad design choices by supposedly experienced professionals. A sense of professionalism, confidentiality agreements, and some sense of common decency have prevented me from calling people out on some of this. No more! In this session I will walk you through a typical bad design like many I have seen. I will show you what I see when I reverse engineer a supposedly complete design, walk through what is wrong with it, and discuss options to correct it. This will be a test of your knowledge of data warehouse best practices: see if you can recognize these worst practices.

Thursday's Hands-on Labs

A chance to get up, close and personal with Oracle Database In-Memory

Oracle Database In-Memory Option Boot Camp: Everything You Need to Know
Time: 10:55 AM - 12:30 PM
Oracle Database In-Memory introduces a new in-memory–only columnar format and a new set of SQL execution optimizations such as SIMD processing, column elimination, storage indexes, and in-memory aggregation—all of which are designed specifically for the new columnar format. This hands-on lab provides step-by-step guidance on how to get started with Oracle Database In-Memory and how to identify which of the optimizations are being used and how your SQL statements benefit from them. Experience firsthand just how easy it is to start taking advantage of this technology and the incredible performance improvements it has to offer.
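
If you want a feel for how little is involved before you sit the lab, here is a minimal sketch (the SALES table is illustrative) of populating a table into the column store and confirming that it is actually being used:

    -- Mark the table for population into the In-Memory column store
    ALTER TABLE sales INMEMORY PRIORITY HIGH;

    -- Check population status
    SELECT segment_name, populate_status, bytes_not_populated
    FROM   v$im_segments;

    -- Confirm the plan uses the column store: look for "TABLE ACCESS INMEMORY FULL"
    EXPLAIN PLAN FOR
      SELECT prod_id, SUM(amount_sold) FROM sales GROUP BY prod_id;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);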


...well, that is my list of sessions and labs that you should definitely add to your personal conference agenda. It really is the perfect place to network with your data warehouse and big data peers and Oracle product management. Have an awesome conference and I hope to see you at next year’s conference.

 


Thursday, 3 December 2015

Looking forward to #ukoug_tech15 conference

Next week I will be in Birmingham at the annual UKOUG Tech conference. I will be presenting on some of the new SQL features that we added in Database 12c to support analysis of big data. I am also planning to cover some of the features that we are planning for Database 12c Release 2, which is currently in beta. The talk is titled “SQL - The Goto Language for Big Data Analytics” and it should have something for everyone - if you have not made the move to Database 12c then I will be showing some great features that will encourage you to start your planning, and if you are already using 12c there will be some exciting new features to look forward to with Database 12c Release 2.

To give you a taster of what to expect during my session, which is on Wednesday Dec 9 at 3:30pm in Hall 9, here are some sort-of relevant Dilbert cartoons, courtesy of Scott Adams:

Dilbert-Big-Data-Cloud-In-Memory

 

 

Dilbert-Approximate-Answers

 

If you would like to download my slides in advance of the session then follow this link: http://bit.ly/1O5sy11. See you next week at the conference.


Tuesday, 1 December 2015

My highlights from DOAG 2015...

Last week I was in Nuremberg at the excellent annual German Oracle user group conference - DOAG15. This was my first visit to DOAG and I was amazed at the size of the event. It is huge, but most importantly it is very well organized and definitely one of the best, possibly the best, user conferences that I have attended.

…and then there was the chance to drink some gluhwein with the database PM team on the way to the speakers dinner.

Nuremberg

 

There were a lot of database product managers in attendance so it was a great chance to catch up with my colleagues from HQ. This, combined with lots of Oracle ACE Directors, made for lots of great sessions. Quite surprisingly there was a great spread of English-speaking sessions and German sessions with translation. I even attempted a few German sessions that did not offer translation services. Honestly, if you are based in Europe and speak either German or English and you are looking for a conference that offers deep technical content along with product road maps about where Oracle is heading, then this conference should be top of your list!

What were the highlights? Well, there were so many it is hard to pick the best ones, but here goes….

Obviously I have to start with my session: Using Analytical SQL to Intelligently Explore Big Data

This session is aimed at both developers and DBAs who are working on the next generation of data warehouse ecosystems. Working in these new big data-based ecosystems can be very challenging. What if you could quickly and easily apply sophisticated analytical functions to any data, whether it’s in Hadoop/NoSQL sources or an Oracle Database instance, using a simple declarative language you already know and love? Welcome back to SQL! This presentation will discuss the next generation of SQL with the latest release of the Oracle database and how you can take advantage of its analytical power for any kind of data source, inside or outside the database. Come and be amazed at how much SQL can really do to help you deliver richer big data analytics.

More information, including a link to the slides, see here: https://www.doag.org/konferenz/konferenzplaner/konferenzplaner_details.php?id=473721&locS=1&vid=504922

 

Hermann Baer’s session on Oracle Partitioning - and everybody thought it no longer works

Oracle Partitioning is arguably one of the most frequently used database tools, especially for systems with a large amount of data in or beyond the multi-terabyte range. After nearly two decades of development (Oracle Partitioning first became available in Oracle 8.0 in 1997) it may appear logical to think that the development of Partitioning will at least slow down at some point. Think again. The features available in the newest version of the Oracle database include numerous new tools for all aspects of data management and data access.

Attend this lecture and learn first-hand about the latest news on Oracle Partitioning from Oracle Development.

This session covered some of the new features coming in the next release of Oracle Database 12c so Hermann’s slides are not available for download. In the meantime, make sure you know all about the new partitioning features that are in the current release of Database 12c, see our partitioning home page on OTN: http://www.oracle.com/technetwork/database/options/partitioning/overview/index.html
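
Since the newest features are still under wraps, here is just a quick refresher - a minimal sketch of interval partitioning (available since 11g; the table and columns are illustrative), where new monthly partitions are created automatically as data arrives:

    CREATE TABLE sales_hist (
      sale_id    NUMBER,
      sale_date  DATE,
      amount     NUMBER
    )
    PARTITION BY RANGE (sale_date)
    INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
    ( PARTITION p_initial VALUES LESS THAN (DATE '2015-01-01') );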

 

Jean-Pierre Dijcks’s two sessions on 1) Big Data SQL New Features and Roadmap

The big data world moves fast. By the time of the DOAG conference, Big Data SQL will have gone through some major versions. The latest generation of Big Data SQL promises a major performance upgrade for Oracle SQL on Hadoop and NoSQL databases. We will go into the major new features, like Storage Indexes (and more secret stuff!), that have been added over the last six months, and explain their benefits, the technology underpinnings and the performance benefits.

More information, including a link to the slides, see here: https://www.doag.org/konferenz/konferenzplaner/konferenzplaner_details.php?id=473721&locS=1&vid=505254

followed by 2) How to choose between Hadoop, NoSQL or Oracle Database

With today’s hype it is almost impossible to quickly see what the right tool is for the job. Should you use Hadoop for all your data? How do you ensure you scale out a reader farm? When do you use Oracle Database? Do you use all of them, and if so - why?

This session clears up (most of) the confusion. We will discuss three simple criteria - performance, security and cost - to quickly get to the bottom of which technology you should use when. Note that this is not a product session trying to sell you a technology; the session digs into the fundamentals of these technologies, the core of their existence, to explain why some types of workloads cannot be solved with some technologies. Also note that this is not about saying bad things about technologies. Again, we go back to basic laws of physics and explain when to use what.

Why come to this session:

  • Understand the basics of Big Data SQL and its use in big data architectures (including licensing)
  • See how the new features impact your ability to run more queries faster, and understand how to leverage these in your big data environment
  • See the inner workings of some of the latest functionality from an under-the-hood perspective

More information, including a link to the slides, see here: https://www.doag.org/konferenz/konferenzplaner/konferenzplaner_details.php?id=473721&locS=1&vid=506278

Jean-Pierre Dijcks

 

Bryn Llewellyn’s two sessions: 1) Transforming one table to another: SQL or PL/SQL?

You’ve heard the mantra “If you can do it in SQL, do so; only if you can’t, do it in PL/SQL.” Really? Always? What if you want to compute non-overlapping RowID ranges that completely span a table? An exact solution is trivial; but it’s too slow for a table of typical size. Approximate solutions, using only SQL, are posted on the Internet. But they impose limitations, and leave generalization to the reader -- and there's more to the generalization than meets the eye.

This session examines some alternative general solutions. Some use only SQL; some combine SQL and PL/SQL. All pass the same correctness tests; all have comparable performance. Choosing one therefore reflects some rather subtle, subjective considerations. I use the vehicle of making the choice to discuss issues of best practice, both SQL versus PL/SQL and within PL/SQL. You can guess what I’d prefer. Your choice might be different. And I will welcome debate.

More information, including a link to the slides, see here: https://www.doag.org/konferenz/konferenzplaner/konferenzplaner_details.php?id=473721&locS=1&vid=505543


Bryn Llewellyn

Bryn was very kind to point out that I still owe him a version of his conversion process based on the MATCH_RECOGNIZE clause. It’s on my todo list (quite near the top) but it is all a question of finding the time.

and 2) Why use PL/SQL? (followed by a meeting of the DOAG PL/SQL Community)

The success of any serious software system depends critically on adhering to the principles of modular decomposition, principles that are as old as computer science itself. A module encapsulates a coherent set of functionality, exposing it through an interface that exactly maps the functions it supports, and scrupulously hiding all implementation details. No one would dream of challenging these notions, unless one of the major modules is a relational database!

The structure of the tables, the rules that constrain their rows, and the SQL statements that read and change the rows are the implementation details. And Oracle Database provides PL/SQL subprograms to express the interface.

Yet many customers avoid stored procedures. They claim that PL/SQL in the database can't be patched without unacceptable downtime; that the datatypes for passing row sets between the client-side code and the database PL/SQL are cumbersome to use and bring performance problems; and that it's impractical to give each developer a private sandbox within which to make changes to database PL/SQL. These notions are myths; they are no less defensible than the myth that technology whose name doesn't start with "J" or "No" is too old-fashioned to be useful.

Customers who do hide their SQL behind a PL/SQL interface are happy with the correctness, maintainability, security, and performance of their applications. Ironically, customers who follow the NoPlsql religion suffer in each of these areas. This session provides PL/SQL devotees with unassailable arguments to defend their beliefs against attacks from skeptics. I hope that skeptics who attend will experience an epiphany.

 More information, including a link to the slides, see here: https://www.doag.org/konferenz/konferenzplaner/konferenzplaner_details.php?id=473721&locS=1&vid=505547


Andy Rivenes’s session on Best Practices With Oracle Database In-Memory

Oracle Database In-Memory is one of the most fascinating new database features in a long time, promising an incredible performance boost for analytics. This presentation provides a step-by-step guide on when and where you should take advantage of Database In-Memory, as well as outlining strategies to help ensure you get the promised performance boost regardless of your database environment. These strategies include how to identify which objects to populate into memory, how to identify which indexes to drop, and details on how to integrate with other performance-enhancing features. With easy-to-follow real-world examples, this presentation explains exactly what steps you need to take to get up and running on Oracle Database In-Memory.

More information, including a link to the slides, see here: https://www.doag.org/konferenz/konferenzplaner/konferenzplaner_details.php?id=473721&locS=1&vid=513187

 

Kim Berg Hansen’s session on Analytic Functions 101. Kim’s session provided my favorite quote of the week - “You just can’t live without Analytical SQL functions”.

SQL gives you rows of detail; aggregate functions give you summarized data. To solve your requirements you often need both at the same time. Or you need, in one row, to access data from or compare with one or more other rows. That can be messy with plain SQL, often requiring accessing the same data multiple times.

Analytic functions can help you to make your SQL do many such things efficiently, that you otherwise might have been tempted to do in slow procedural loops. Properly applied analytic functions can speed up the tasks (for which they are appropriate) manyfold.

In this session I'll cover the syntax as well as little quirks to remember, all demonstrated with short and clear examples.
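
The sort of thing Kim is describing - detail rows and summarized or offset data side by side in one result - looks like this minimal sketch (the ORDERS table is illustrative):

    SELECT order_date,
           amount,
           SUM(amount) OVER (ORDER BY order_date)          AS running_total,
           amount - LAG(amount) OVER (ORDER BY order_date) AS change_vs_prev,
           RATIO_TO_REPORT(amount) OVER ()                 AS share_of_total
    FROM   orders
    ORDER BY order_date;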

More information, including a link to the slides, see here: https://www.doag.org/konferenz/konferenzplaner/konferenzplaner_details.php?id=473721&locS=1&vid=506467.

Kim Berg Hansen

 

 

 

Alex Nuijten’s session on SQL Model Clause: A Gentle Introduction

The Model Clause adds spreadsheet-like functionality to the SQL language. Very powerful stuff, yet the syntax can be quite daunting. With the Model Clause, you define a multidimensional array from query results and then apply formulas to this array to calculate new values. This presentation will give you a gentle introduction in the complex syntax of the Model Clause and will show you some real life examples where the Model Clause was invaluable.
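
For anyone who has never seen it, here is a minimal sketch of the Model clause (the SALES table, dimension and measure are illustrative), filling in a cell for a year that does not yet exist in the data:

    SELECT country, year, amount
    FROM   sales
    MODEL
      PARTITION BY (country)
      DIMENSION BY (year)
      MEASURES (amount)
      RULES (
        -- create a new cell for 2016 as the average of the two previous years
        amount[2016] = (amount[2014] + amount[2015]) / 2
      )
    ORDER BY country, year;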

More information, including a link to the slides, see here: https://www.doag.org/konferenz/konferenzplaner/konferenzplaner_details.php?id=473721&locS=1&vid=506430. Who would have thought that you could use the Model clause to create fractals? But then, with SQL almost anything is possible:

Using Analytical Functions to create fractals

 

 

Andrej Pashchenko’s session on 12c SQL Pattern Matching - when will I use it?

The SQL Pattern Matching clause (MATCH_RECOGNIZE) introduced in 12c is often associated with Big Data, complex event processing, etc. Should SQL developers who have not (yet) faced similar tasks ignore this new feature? Absolutely not! The SQL extension is powerful enough to solve many common SQL tasks in a new, simpler and more efficient way. The lecture will provide an insight into the new SQL syntax and present a few examples that will give developers a foretaste of what is to come.

More information, including a link to the slides, see here: https://www.doag.org/konferenz/konferenzplaner/konferenzplaner_details.php?id=473721&locS=1&vid=506436. This session was in German but the slides are a great introduction to pattern matching.

Andrej Pashchenko

 

 

 

Kellyn PotVin-Gorman’s session on The Power of the AWR Warehouse and Beyond:

Enterprise Manager 12c Release 4 introduced the new AWR Warehouse feature, which allows you to automatically extract detailed performance data from your database targets and consolidate it into a single location within the AWR Warehouse repository. The main source of data stored in the AWR Warehouse is the Automatic Workload Repository (AWR), via AWR snapshots, letting you retain a long-term history of AWR data from chosen EM12c database targets and enabling full historical analysis of AWR data across databases without impacting the source or user database.

This session will cover the power behind this incredible new feature, a full review of the architecture and ETL process, and then move on to what everyone who has worked with AWR data really wants to know: what is the impact on the existing suite of AWR queries, and how much can it really provide the business and the DBA when searching for answers?

More information, including a link to the slides, see here: https://www.doag.org/konferenz/konferenzplaner/konferenzplaner_details.php?id=473721&locS=1&vid=504331

 

Chris Saxon’s session on Edition Based Redefinition for Zero Downtime PL/SQL Changes:

Releasing code changes to PL/SQL in heavily executed packages on databases with high uptime requirements brings many challenges. It’s not possible to gain the DDL lock to compile a package while it’s being executed, so how do you do so without downtime?

In 11gR2, Oracle introduced edition-based redefinition (EBR). This allows you to deploy multiple versions of PL/SQL objects in the same database. Using EBR, you can release PL/SQL easily without downtime.

This talk discusses earlier methods of tackling this problem and shows how EBR is superior. It steps through the process of creating and using editions with a live demo. It also investigates some of the downsides to using EBR and techniques you can use to mitigate these.
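
The moving parts are small. Here is a minimal sketch (the edition, schema and package names are all illustrative) of creating a new edition and deploying changed code into it, while the old edition keeps serving existing sessions:

    -- One-off setup: the application schema must be editions-enabled
    ALTER USER app_owner ENABLE EDITIONS;

    -- Create a child edition for the new release and allow the schema to use it
    CREATE EDITION release_2 AS CHILD OF ora$base;
    GRANT USE ON EDITION release_2 TO app_owner;

    -- Deploy the changed code into the new edition only
    -- (assumes the ORDER_API package spec already exists and is unchanged)
    ALTER SESSION SET EDITION = release_2;
    CREATE OR REPLACE PACKAGE BODY app_owner.order_api AS
      FUNCTION version RETURN VARCHAR2 IS
      BEGIN
        RETURN 'release 2';  -- new behaviour, visible only to sessions using RELEASE_2
      END version;
    END order_api;
    /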

More information, including a link to the slides, see here: https://www.doag.org/konferenz/konferenzplaner/konferenzplaner_details.php?id=473721&locS=1&vid=505547

 

Christo Kutrovsky’s session on Maximize Data Warehouse Performance with Parallel Queries

Oracle data warehouses are typically deployed on servers with a very large number of cores, and increasingly on RAC. Making efficient use of all available cores when processing data warehouse workloads is therefore critical to achieving maximum performance. To make efficient use of all cores in a data warehouse system, skilled use of parallel queries is key.

In this presentation we'll look at how to effectively use parallel queries and how to avoid common mistakes that often cripple data warehouse performance. You will learn all about how parallel queries work, which join methods to use with them, how to avoid expensive use of temp space and how to leverage all nodes efficiently by optimizing data flow. This is a practical session, and you will leave with several techniques you can immediately apply.
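
As a flavour of what is involved, here is a minimal sketch (the table, alias and degree of parallelism are illustrative) of requesting parallelism for one statement and then checking how the work was actually distributed across the parallel servers:

    -- Ask for a degree of parallelism of 8 on this statement only
    SELECT /*+ PARALLEL(s 8) */
           s.prod_id, SUM(s.amount_sold) AS total_sales
    FROM   sales s
    GROUP  BY s.prod_id;

    -- After the query, see how rows flowed between producer and consumer server sets
    SELECT dfo_number, tq_id, server_type, process, num_rows
    FROM   v$pq_tqstat
    ORDER  BY dfo_number, tq_id, server_type;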

More information, including a link to the slides, see here: https://www.doag.org/konferenz/konferenzplaner/konferenzplaner_details.php?id=473721&locS=1&vid=505695.

 

and that was DOAG 2015. An incredible three days in Nuremberg. I met lots of wonderful people and am definitely looking forward to DOAG 2016. If you are only planning to attend one conference in Europe next year then I would definitely recommend DOAG 2016. It will be around the same time next year, towards the end of November, so start making plans. The DOAG website is here: http://www.doag.org/en/conferences/conferences.html

Thanks to everyone at DOAG for a great conference and see you next year.


Friday, 20 November 2015

Review of Data Warehousing and Big Data at #oow15

“From the keynotes to the connections to the entertainment, there truly is something for everyone at Oracle OpenWorld” 

 

DW BigData Review Of OOW15
 
This year OpenWorld was bigger, more exciting and packed with sessions about the very latest technology and product features. Most importantly, both data warehousing and Big Data were at the heart of this year’s conference across a number of keynotes and a huge number of general sessions. Our hands-on labs were all completely full as people got valuable hands-on time with our most important new features. The key focus areas at this year’s conference were:
  • Database 12c for Data Warehousing
  • Big Data and the Internet of Things 
  • Cloud - from on-premise, to the Cloud, to running hybrid Cloud systems
  • Analytics and SQL continue to evolve to enable more and more sophisticated analysis.
All these topics appeared across the main keynote sessions, including live on-stage demonstrations of how many of our new features can be used to increase the performance and analytical capability of your data warehouse and big data management system - check out the on-demand videos for the keynotes and executive interviews.

If you want to revisit the most important sessions, or if you simply missed this year’s conference and want to catch up on all the most important topics, then I have put together a comprehensive review of this year’s conference which is divided into the following sections:

  • Overview
  • Key Feature Areas
    • Oracle Database 12c for Data Warehousing
    • Big Data
    • Cloud
    • Analytics and SQL
  • Hands-on Labs
  • OpenWorld 2016
Each of these sections includes all the most important sessions and links to product pages and all the hottest social media sites. If you are using the iBook version, some of the movies and all of the presentations have been preloaded into the iBook; if you are using the PDF version then you will have to follow the links to view the videos and to download the slides for each presentation. You can download my review in Apple iBook format by clicking here and in PDF format by clicking here. Hope this proves useful, and if I missed anything then let me know.

Please note that downloadable presentations for some of the sessions are not available because they included content about features that will be part of the next release of Oracle Database. For these sessions I have provided links to the relevant product pages on OTN, where you will find information about all the latest and greatest features of Database 12c.

Finally, if you missed this year’s conference then I guarantee that you will not want to miss #oow2016 because it is going to be even bigger and even more exciting than #oow15. The last section of the review booklet provides 10 reasons why you will want to be at #oow2016. Overall it's a great read.


Monday, 12 October 2015

OpenWorld 2015 on your smartphone and tablet - UPDATED!

Most of you probably know that each year I publish a data warehouse guide in iBook and PDF format for OpenWorld which contains links to the latest data warehouse videos, a list of the most important sessions along with hands-on labs and profiles of the key presenters. For this year’s conference I have again made all this information available in an HTML web app that (should) run on most smartphones and tablets. The picture below shows the app running on my desktop:
App1
This year’s web app will help you get the most from this year’s conference. It includes the following:
  • Getting to know Database 12c  - a series of video interviews with George Lumpkin, Vice President of Data Warehouse Product Management and links to information about Oracle’s Database Cloud
  • Your presenters - full biographies and links to social media sites for all the key data warehouse presenters
  • Must-see sessions - list of all the most important data warehouse presentations at this year’s conference
  • Must-attend labs - list of all the most important data warehouse hands-on labs at this year’s conference
  • Links - a list of links to the most important data warehouse sites
  • Live Twitter - twitter feeds from our product management teams
  • Maps - some useful maps for the local area to help you get around the various conference locations, grab a coffee, get something to eat 
OCT 12 - UPDATED: Added new sessions and incorporated some scheduled changes to existing sessions as well.
To run this web app on your smartphone/tablet follow this link: http://tinyurl.com/nhjmszz
Android users: Last year when I tested this app on my Samsung S3 there was a bug in the way the Chrome browser displayed frames: scrolling did not work. This year I have swapped to an LG3 Android phone and everything appears to be working correctly. However, if the app does not work as expected on your phone then try using the Android version of the Opera browser instead.
If you have any comments about the app (content you would like to see) then please let me know. Enjoy OpenWorld.
 


Thursday, 8 October 2015

Android App for Data Warehousing and Big Data at #oow15

 

Android App for OOW15

 

A number of people have asked if I could make a smartphone app version of my usual OpenWorld web app (OpenWorld 2015 on your smartphone and tablet) which does not require a data or wifi connection. For Android users there is some good news: here is my new Android app for OpenWorld that installs on your smartphone and runs without needing a wifi connection or a mobile data connection.

This DW and Big Data app is nice and small so it is quick to download and install onto your phone. It contains everything you need to know about data warehousing and big data activities at OpenWorld. The app has sections for:

  • Key Oracle Speakers - arranged by category 
  • Data warehouse and big data sessions by Oracle product managers, key partners and customers. Organized by category and day-by-day
  • Hands on Labs. Organized by category and day-by-day
  • Maps to help you find your way around Moscone and the surrounding area

To download the Android app click here: DWBDoow15.apk. Please note that your phone may require you to allow installation of apps from sources other than Google Play Store. The app does not require access to any other app, emails, logs, contacts etc on your phone.

My plan for next year is to have the app posted on both the Google Play Store and the Apple App Store. Therefore, it would be really helpful to get some feedback on this app - is it a good idea to have a dedicated app, what additional content would you like to see, etc.

Hope this helps you get organised for this year’s incredible conference. Enjoy OpenWorld. If you get a chance then please stop and say “Hi” at our demo booth: SQL Analytics and Analytic Views, the booth id is SLD-046. 

  


Tuesday, 6 October 2015

Online Calendar for Data Warehousing and Big Data Sessions at #oow15 now available


 

I have published a shared online calendar that contains all the most important data warehouse and big data sessions at this year’s OpenWorld. The following links will allow you to add this shared calendar to your own desktop calendar application or view it via a browser:

 

oow-calendar

 

The calendar also includes all the keynote sessions and, most importantly, the hands-on labs. Please note that the hands-on labs are taking place at the Nikko Hotel, which is at 222 Mason St - map here (includes directions from Moscone Center). You should allow 12-15 mins to walk from Moscone Center to the Nikko Hotel.

Unfortunately, your calendar is unlikely to have all the glorious colors that you see in the picture although I am not sure why that information is not encoded into the HTML page view. The colors do serve a purpose because they identify the type of session:

  • RED - keynote session
  • LIGHT RED - data warehouse/big data general session
  • YELLOW -  big data session
  • GREEN - analytics session
  • LIGHT BLUE - unstructured data
  • BLUE - hands-on lab

Hope this helps you get organised for this year’s incredible conference. Any comments then let me know and if I missed your data warehouse/big data session then let me know and I will add it to the calendar.

Enjoy OpenWorld. If you get a chance then please stop and say “Hi” at our demo booth: SQL Analytics and Analytic Views, the booth id is SLD-046. 

  
