In most cases, I consider data (i.e. rows, columns, byte
streams) an artifact of a business transaction. Acquire a new customer, there’s
data for that. Take and fulfill an order, there’s data for that. Receive
payment for goods and services, there’s data for that. While data is part and
parcel of conducting business, can it be used to gain insight and awareness of
something more than just the specific transaction it represents?
Think about it… the data you have accumulated over weeks,
months and years represents knowledge and experience. If you can use this data
in new and creative ways, you will gain more intelligence about the business that
was conducted, as well as that of your customers and clients. This wisdom is
becoming the Holy Grail for companies (and thieves); made ever more lucrative
by Big Data and the increasing use of social media, aggregation via centralized
solutions, and cloud-based infrastructure. Facebook and Google know who you like
and where you go. Amazon knows what you like, even before you do. According to
the NY Times, companies like Target seem to know more about your current biological
state than your family and friends do.
This surveillance, exploration and analysis is traditionally
known as business intelligence (BI). The work of producing information from
data ranges from simple query and reporting on one end of the spectrum to deep
mining, sophisticated analytics and predictive modeling on the other.
It seems everyone is doing some kind of query and reporting,
but relatively few are embarking on extracting real value from data through
systematic and sustained analysis. To be honest, what I mostly see when looking
into DB2 for i query performance issues is frequent and large data extracts
from production databases that are then used to feed an ever-growing field of
Excel spreadsheets. It is always interesting to ask “hey, who’s actually reading
the thirty thousand rows coming back from this query?” Or, “who’s running this
big QUERY/400 report every hour?” From what I can tell, QUERY/400 has become an
extraction tool to get data OUT of DB2 for i. C’mon folks; there is a better
way to turn data into useful information.
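To make the point concrete, here is a minimal sketch of the difference between the "extract everything" habit and letting the database do the work. The table and column names are invented for illustration, and Python's built-in sqlite3 stands in for DB2 for i; the idea, pushing summarization into the SQL engine so only a small result set comes back, is the same on either platform.

```python
import sqlite3

# Hypothetical order data; sqlite3 is used here purely as a stand-in
# for DB2 for i, and the schema is invented for this sketch.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("EAST", 100.0), ("EAST", 250.0), ("WEST", 75.0), ("WEST", 125.0)],
)

# The "extract everything" anti-pattern: pull every detail row out of the
# database, then summarize it by hand in a spreadsheet.
all_rows = conn.execute("SELECT region, amount FROM orders").fetchall()

# The better way: ask the engine for the answer, not the raw material.
# One small, aggregated result set comes back instead of thousands of rows.
summary = conn.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region ORDER BY region"
).fetchall()

print(summary)  # [('EAST', 350.0), ('WEST', 200.0)]
```

The second query returns one row per region no matter how many detail rows exist, which is exactly the kind of work a thirty-thousand-row extract into Excel is usually trying to do the hard way.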
A number of years ago I referred to this notion of
extracting data and building local databases to support spreadsheet gurus as
“guerrilla data warehousing”. That is to say, one of the most common analysis
and reporting processes used in business today involves all of the basic BI
components – extracting/transforming/loading of data, warehousing of data, and
reporting of data – but it contains none of the best practices for scalability,
performance, security, uniformity and consistency that a robust knowledge management
system requires. I have nothing against the use of Excel spreadsheets. I do
have a problem with everyone in the sales department building mini data
warehouses on their desktops. Maybe this is how my personal sales transaction history
walks out of an establishment. I’m just saying…
So, you are using IBM i to run your business operations.
Have you thought of extracting more value from the data being produced day in
and day out? Keeping your structured and unstructured data inside of DB2 for i,
and using a proper BI architecture along with appropriate tools and methods is
the easiest and most cost-effective way to gain wisdom and insight. This in
turn will allow you to make decisions and take actions to increase
productivity, profit and customer satisfaction. Even if you prefer an Excel
spreadsheet interface, there are many query and reporting tools such as DB2 Web Query that can likely meet your requirements.
For some additional insights into analytics, check out this
video featuring our own Mark Anderson, Chief Architect of DB2 for i.