Meet up with Ciber’s Analytics Thought Leaders at IBM’s IOD 2013

on Oct 8th, 2013 in BI, Analytics, & Performance Management, Big Data, IBM | Comments Off on Meet up with Ciber’s Analytics Thought Leaders at IBM’s IOD 2013

Do you have questions today that traditional reporting and analytics can’t answer? Do you have other unstructured, social media, or streaming data that you can’t yet process or analyze, but that you know could give you the additional clarity or predictability to answer questions you couldn’t answer before?

You may have a big data challenge. More importantly, you have a great opportunity to make an impact by incorporating a big data approach into traditional business analytics.

Come join us at IBM’s Information on Demand 2013 Conference and meet up with Richard Gristak and Allen Shain, Ciber’s thought leaders on business analytics and big data. You will hear how Ciber delivers big data solutions across healthcare, telco, banking, and many other industries, and how they can help you glean true business insights from your big data challenge.

Just drop a note to us using the IOD message center under the agenda builder. We look forward to seeing you at IBM’s IOD 2013!

Big Analog Data™ – The Oldest, Fastest, and Biggest Big Data

on Aug 1st, 2013 in Enterprise Integration & IT Strategy, IBM | Comments Off on Big Analog Data™ – The Oldest, Fastest, and Biggest Big Data

In my role at Ciber, I have had the opportunity to talk with industry leaders who are making a difference in helping companies understand how to gain business value from their data.  In the area of Big Data, one of those thought leaders is Dr. Tom Bradicich, Fellow at National Instruments, a Ciber strategic alliance partner.  Next week, at NIWeek, National Instruments’ global conference in Austin, Texas, we will be talking with other NI partners and attendees about how they can unlock the business value of Big Analog Data.

We have invited Tom to be our guest blogger this week and provide his views on this subject.

Tom Bradicich, PhD
Fellow and Corporate Officer
National Instruments

For my job at National Instruments, I travel the world and see firsthand how engineers and scientists are acquiring vast amounts of data at very high speeds and in a variety of forms. I’ve seen how tens of terabytes can be created in just a few seconds of physics experiments, and how similar amounts can be generated in hours by measuring jet engines or testing turbines used for electric power generation. Immediately after this data acquisition, a big data – or “Big Analog Data™” – problem exists. From my background in the IT industry with IBM, it’s clear to me that advanced tools and techniques are required for data transfer, management, and analytics, as well as systems management for the many data acquisition and automated test system nodes.

I call this the “Big Analog Data™” challenge simply because it’s both big and “analog”. That is, the sources of this data include physical phenomena generated by nature or machines – for example, light, sound, temperature, voltage, radio signals, moisture, vibration, velocity, wind, motion, magnetism, particulates, acceleration, current, pressure, time, and location, as illustrated in the figure. When testing a smart phone at the end of a manufacturing line, there are many analog phenomena to measure, such as sound, three radios (cell, Bluetooth, and Wi-Fi), vibration, light, touch, video, orientation, and location.

Related to this challenge is data archive management. I’ve spoken with engineers who commonly tolerate a type 2 error – keeping worthless data – in order to avoid committing a type 1 error – discarding valuable data. Advances in real-time data analytics will help bifurcate the data accordingly; however, I’m not sure whether these advances will cause us to discard more data or to keep more data.
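
That bifurcation tradeoff can be sketched in a few lines (my own illustration, not an NI tool): flag readings that deviate strongly from the baseline as “keep”, and treat routine readings as discard candidates. The two-sigma threshold here is a hypothetical knob – lowering it keeps more worthless data (the type 2 error above), while raising it risks discarding valuable data (type 1).

```python
from statistics import mean, stdev

def bifurcate(samples, threshold=2.0):
    """Split samples into (keep, discard) by distance from the mean."""
    mu, sigma = mean(samples), stdev(samples)
    keep, discard = [], []
    for s in samples:
        # Keep anything unusual; routine readings are candidates for discard.
        (keep if abs(s - mu) > threshold * sigma else discard).append(s)
    return keep, discard

# A mostly flat signal with one anomalous spike worth archiving.
signal = [1.0, 1.1, 0.9, 1.0, 9.5, 1.05, 1.0, 0.95]
keep, discard = bifurcate(signal)   # only the 9.5 spike is kept
```

A real deployment would use domain-specific anomaly models rather than a global sigma test, but the keep-versus-discard tension is the same.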

Characterizing Big Analog Data™

In general, Big Analog Data™ is a form of big data, which the literature commonly characterizes by a combination of three or four “V’s” – Volume, Variety, Velocity, and Value. In addition, another “V” of big data I’m seeing is “Visibility”. This describes globally dispersed enterprises needing access to the data in multiple locations, both to do analytics and to see results.

Big Analog Data™ is distinguished from all other big data in three fundamental ways. First, it’s “older”, in that many Big Analog Data™ sources are natural analog phenomena such as light, motion, and magnetism. These natural sources have been around since the beginning of the universe. I think…

Second, it’s “faster” since some analog time-series signals require digitizing at rates as fast as tens of gigahertz, and at much wider bit widths than other big data. And third, it’s “bigger” because Big Analog Data™ information is constantly being generated from both nature and electrical and mechanical machinery. Consider the unceasing light, motion, and electromagnetic waves all around us right now.
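
To see why “faster” quickly becomes “bigger”, a back-of-the-envelope calculation helps (the figures are illustrative choices of mine, not from the post): even a single digitizer channel at a modest 10 gigasamples per second with 16-bit samples produces data at a rate most IT pipelines would consider extreme.

```python
# One digitizer channel: sample rate times sample width gives raw throughput.
sample_rate_hz = 10e9      # 10 GS/s, well below the "tens of gigahertz" cited
bits_per_sample = 16

bytes_per_second = sample_rate_hz * bits_per_sample / 8
terabytes_per_minute = bytes_per_second * 60 / 1e12

print(f"{bytes_per_second / 1e9:.0f} GB/s, {terabytes_per_minute:.1f} TB/min")
```

That is 20 GB/s from one channel; multi-channel physics or jet-engine test rigs multiply this many times over, which is how tens of terabytes appear in seconds.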

The Value of Big Analog Data™

When selling to non-technical businesses such as retail or travel, the sales proposition for big data is fundamentally two phases. Phase one is, “you should acquire lots more data because there’s great value in it”, and phase two is, “you should buy my hardware and software to extract the value”. However, in my business – test, measurement, and control – phase one is usually skipped because engineers and scientists inherently understand statistical significance. That is, it’s intuitive that small data sets can limit the accuracy of conclusions and predictions.

With advanced acquisition techniques and big data analytics, new insights can be derived that have never before been seen. For example, I’m working with companies seeking greater visibility into test and asset-monitoring data. We’re helping them identify emerging quality trends and predict machine failures. With rotating machinery, converting an unplanned surprise outage into a planned maintenance outage has great value. In scientific labs, we work to accelerate discovery with high-speed, highly accurate measurements in their experiments.

The Three-Tier Big Analog Data™ Solution

To enjoy these benefits, end-to-end solutions are needed for maximum insight in the most economical way. I’ve seen cases where many devices are under test, and many distributed automated test system nodes are needed. Since these test systems are effectively computer systems with software images and drivers, the need arises for remote network-based systems management tools to automate their configurations, maintenance, and upgrades. Growing data volume forces global companies to ensure data access by many more engineers and data scientists than in the past. This requires network gear and data management affording multiuser access to geographically distributed data. I’m seeing the cloud gaining favor for both data access and scalability of simulations and analytics.

Big Analog Data™ solutions are organized into a three-tier architecture, as shown in the figure. These tiers come together to create a single, integrated solution adding insight from the real-time point of data capture (sensors) to analytics in the back-end IT infrastructure. Data flows across “The Edge”, which is the point where data acquisition and test system nodes connect to traditional IT equipment. Data then hits a network switch in the IT infrastructure tier, where servers, storage, and networking manage, organize, further analyze, and archive the data.
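
The flow through the three tiers can be schematized as a tiny pipeline (the function names are mine, not NI’s): capture at the node, reduction at The Edge before the data crosses into IT gear, then retention in the back-end tier.

```python
def acquire(samples):
    """Tier 1: a sensor/DAQ node produces raw readings."""
    return samples

def edge_reduce(raw, window=4):
    """Tier 2 ("The Edge"): summarize windows of raw data before
    it crosses into traditional IT equipment."""
    return [sum(raw[i:i + window]) / window for i in range(0, len(raw), window)]

def archive(summaries, store):
    """Tier 3: IT infrastructure organizes and retains the data."""
    store.extend(summaries)
    return store

store = []
raw = acquire([float(i) for i in range(16)])
archive(edge_reduce(raw), store)   # 16 raw readings become 4 stored averages
```

Real edge tiers do far richer in-motion analytics than windowed averaging, but the shape – reduce early, archive late – is the architectural point.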

It’s interesting to note that in the IT industry, the point at which data first hits a server is referred to as “real time”. However, in my world – the test and measurement industry – by the time data flows through the middle tier over The Edge and hits a server, it’s quite aged. That said, the spectrum of value spans all five phases of the data flow (see the figure above), from real time to archived. Real-time analytics are needed to determine the immediate response of a control system and adjust accordingly, such as in military applications or precision robotics. At the other end, archived data can be retrieved for comparative analysis against newer in-motion data, to gain insight into, say, the seasonal behavior of an electrical power generating turbine.

Significant in-motion and early-life analytics and visualization take place in the solution’s middle tier, via National Instruments CompactRIO, NI CompactDAQ, and PXI systems and software such as LabVIEW, DIAdem, and DataFinder. Through my experience with end-to-end solutions, I know the value of skilled systems integrators such as Ciber. A strong focus on the interplay among the solution tiers greatly lessens deployment and integration risk and reduces time-to-value.

Well, enough about me and what I think; reply below to let me know your Big Analog Data™ thoughts.

Is Flash Storage Misunderstood?

on Jul 18th, 2013 in Big Data, Enterprise Integration & IT Strategy, IBM, Servers, Storage, and Software | Comments Off on Is Flash Storage Misunderstood?

A recent Wikibon article called “Flash and Hyperscale – Changing Database and System Design Forever” caught my attention because it echoes what I’ve been hearing lately about flash storage.

(For those of you not familiar with Wikibon, it is an online, professional community that features some insightful, moderated sessions and short research notes on challenging business technology problems.)

In exploring the economics of flash storage, the article addresses four frequently heard objections that reveal a profound misunderstanding of computing systems in general and storage systems in particular:

  • Flash – that’s only for Wall Street traders and other fringe applications – it’s a small segment of the market – (every disk vendor);
  • Disk drives will continue to rule; there is insufficient capital to build flash fabs – (CEO of RAID storage system manufacturer);
  • Flash technology is going to hit the wall after the next generation – (every disk vendor, and proponents of other non-volatile storage such as MRAM and RRAM);
  • Flash storage on a server cannot be protected – (every disk vendor who believes that data on SANs is 100% protected).

These are common doubts that I’ve heard expressed at client meetings and on tradeshow floors.

But what caught my attention are the conclusions drawn in the research article – that current applications and application designs are severely constrained by IO (we call that the big data bottleneck), and that low-latency IO reduces those constraints.  The article presents graphs, research results, and case examples to demonstrate that businesses are becoming 20 percent more efficient with flash technology.
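
The IO-constraint argument is easy to see with some simple latency arithmetic (order-of-magnitude figures of my own, not numbers from the Wikibon article): consider a transaction that issues 50 random reads plus 10 ms of CPU work, on spinning disk versus flash.

```python
def txn_time_ms(reads, io_latency_ms, cpu_ms=10.0):
    """Total transaction time when each read stalls for one storage round trip."""
    return cpu_ms + reads * io_latency_ms

disk  = txn_time_ms(50, io_latency_ms=5.0)   # ~5 ms per random disk seek
flash = txn_time_ms(50, io_latency_ms=0.1)   # ~100 microseconds per flash read

print(f"disk: {disk:.0f} ms, flash: {flash:.0f} ms, speedup: {disk/flash:.1f}x")
```

On disk the transaction is almost entirely IO wait; on flash the CPU work dominates again, which is exactly what it means for low-latency IO to relieve an IO-bound design.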

The author, David Floyer, believes this is a profound change in IT and the potential value IT can bring to organizations – I agree, based on the benefits we have seen from Ciber’s Big Data storage solution powered by IBM’s FlashSystem technology.

I also agree with his projection that this trend will start with mid-sized companies, who will move faster to ensure competitive advantage.  I’m generally not so agreeable – but based on the first-hand results we’ve witnessed by implementing FlashSystem, it’s easy to support his conclusions.

Do you have other objections you’d add to the list above?

Cloud Integration: Living in a Hybrid World

on May 21st, 2013 in IBM | Comments Off on Cloud Integration: Living in a Hybrid World

At IBM’s recent PartnerWorld conference, there was much discussion on the future of the IT industry and how it affects the partner community.  Cloud, Mobile, Social, and Analytics were strong topics of discussion, but there was one common thread: IT is no longer making all the decisions about IT.

What was reinforced at PartnerWorld (and what I’ve been hearing from clients) was that increasingly, lines of business are making IT decisions. You now see marketing or sales or HR evaluating applications, particularly cloud-based ones. They are signing agreements for these applications and then informing IT that they have done so. In turn, IT has to figure out how to integrate the new applications with the rest of the enterprise system.

This phenomenon is creating a hybrid world where living with a combination of on-premise and cloud-based applications is increasingly the norm for companies. The challenge then becomes determining how to integrate these applications in a way that is quick, cost-effective and flexible. Flexibility is particularly important, because integration work should never really be considered final. Integration needs will continue to grow as cloud use grows.

Check out my four-minute video interview from IBM PartnerWorld where I talk with industry leader Paul Gillin of Profitecture about these trends and how we’re helping clients effectively address their cloud integration challenges with the IBM Cast Iron solution.

IBM Unveils PureData Systems at IOD 2012

on Oct 29th, 2012 in BI, Analytics, & Performance Management, IBM | 1 comment

At the Information on Demand conference this past week, IBM unveiled the newest line of products under the PureSystems umbrella: the PureData family of integrated appliances.  Designed to address the unique service requirements of different types of applications, these newest offerings from IBM optimize existing software products for transactions and analytics, with workloads optimized for the underlying infrastructure.  Common to all of the PureSystems family are factory-integrated server, storage, network, and software resources.  What PureData brings forward is an expanded family within the area of data management and analytics.

PureData System for Transactions

Designed for applications with high transaction-processing requirements, PureData System for Transactions comes with a highly scalable DB2 platform that IBM claims can be up and running in hours rather than days or weeks.  This solution works well in situations where the database is hit with random reads and random updates.  It will be interesting to see how IBM positions this against its DB2 PureScale offering; some might see this as PureScale offered as an appliance.  Regardless, IBM has taken a definite step forward in its challenge to Oracle’s Exadata, especially considering that buried in the announcements was an Oracle database compatibility feature that enables the movement of data to PureData System for Transactions with “little or no application changes”.  It will be interesting to see how well this plays in large enterprises that are standardized on Oracle Database but have extensive IBM infrastructure.

PureData System for Analytics

Think Netezza, one of IBM’s smartest acquisitions in recent years.  In fact, the sub-header under the product name is “Powered by Netezza technology”.  Designed as the follow-on to the widely successful line of Netezza data warehouse appliances, this system continues to leverage the Netezza analytics foundation, with database portability from DB2, Informix, Oracle, Teradata, and others.  We expect to see it continue as the core foundation for analytic applications, with more Cognos BI solutions pre-built and optimized for business users, as IBM concentrates on analytics markets such as customer analysis and financial performance, where the data warehouse workload consists of sequential reads and data loads.  Underlying the infrastructure are high-availability design elements that support internally redundant S-Blades, with little or no degradation in the event of a processing-node failure.  Further features for developers are the included licenses for InfoSphere BigInsights and InfoSphere Streams.

PureData System for Operational Analytics

The IBM Smart Analytics System (ISAS), based on Power technology, has also found rebirth in the PureData System for Operational Analytics.  Positioned for data warehouses with a combination of random and sequential reads and data loads, this version of the PureData appliances supports applications where analytics are split into many narrow-scope operations running in parallel.  Designed to handle more than 1,000 concurrent operational queries, the system, IBM states, distributes data across servers in a shared-nothing model of parallel processing, ensuring that performance does not degrade as the data warehouse grows.  With data capacity up to a petabyte and uncompressed data loads of over 8 GB/hour, this system is set to be the workhorse of IBM’s software solutions for enterprises requiring high-performance data warehouses and analytics.
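
The shared-nothing idea is simple to sketch (a toy illustration of the general technique, not IBM’s implementation): rows are hash-partitioned across nodes on a distribution key, so each node owns and scans only its own slice, and a query fans out to all nodes in parallel.

```python
from collections import defaultdict

def partition(rows, key, n_nodes):
    """Assign each row to a node by hashing its distribution key."""
    nodes = defaultdict(list)
    for row in rows:
        nodes[hash(row[key]) % n_nodes].append(row)
    return nodes

rows = [{"cust_id": i, "amount": i * 10} for i in range(1000)]
nodes = partition(rows, "cust_id", n_nodes=4)

# Every row lands on exactly one node; adding nodes re-spreads the data,
# which is why growth need not degrade per-node scan times.
assert sum(len(r) for r in nodes.values()) == 1000
```

The production concerns – redistribution on node failure, skewed keys, cross-node joins – are where appliances earn their keep, but the partitioning principle is the one above.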


IBM has certainly embraced the concept of purpose-built appliances designed from the ground up to address business requirements.  Having learned from the highly successful DataPower and Netezza acquisitions, and leveraging the PureSystems Expert Integrated Systems concepts, PureData takes much of the implementation, integration, and management challenge and turns it into business value.  IBM’s approach is to deliver rapid time-to-value, and judging from its 10-year roadmap for PureSystems, this is only the beginning.

The Challenge of Innovation

on Apr 28th, 2010 | 7 comments

Innovators of the 16th century were in danger of being jailed, hanged, or excommunicated; even Machiavelli wrote of the challenges of innovation and change.  Today, innovators are praised and promoted.  Yet even so, innovation can come at a price.  While many companies widely tout that their products or strategies are “innovative”, developing new ideas to meet customer needs and solve business problems takes time and money.  In fact, some companies are spending thousands, if not millions, of dollars on product development when practical and less expensive solutions are available to meet their needs.

At CIBER, we have internalized the concept of “Practical Innovation” – delivering innovative solutions for a client while staying practical, careful, and mindful of their budget and specific needs.  Recently, at Collaborate 2010, the Oracle user group conference, CIBER demonstrated its solution for extending the JD Edwards EnterpriseOne application to mobile devices without middleware.  The advantage to our JDE clients is that, without buying additional software, they can deploy the core application to the field at the cost of the scanners, which they might already own, and the services to implement.  Competitive solutions require the purchase of a software stack with ongoing maintenance and upgrade charges.  Thus, with this offering, CIBER delivers additional value while utilizing assets the client already owns.

In May, at the COMMON conference for companies that utilize IBM mid-range servers, CIBER will be showcasing its data replication solution for IBM Power servers.  By partnering with Crossroads, CIBER is able to offer its clients a virtual tape library that reduces backup windows and leverages backup and recovery management software from IBM (BRMS and Tivoli) as well as from HelpSystems.  At a lower cost than other Virtual Tape Library (VTL) products, our clients gain productivity improvements, replicate to off-site locations, and maintain near-online availability of their data.

In San Antonio, at CUE, the Lawson user group conference, CIBER’s Lawson Practice will continue to roll out its series of CIBER packaged offerings – low-cost, high-value offerings that meet the unique challenges faced by many Lawson clients.  Application security, performance testing, data archiving, and other topics are addressed in narrowly focused projects that deliver rapid ROI and a high degree of success.

So as you look for innovative strategies, keep in mind that they don’t have to be overly expensive or take long to implement.  Following the “Practical Innovation” approach not only saves money – it also keeps you from getting hanged in the process.
