Category Archives: News

Big Data: 6 Bold Predictions For 2015

December 19, 2014
Jeff Bertolucci

‘Tis the season when industry soothsayers don their prognostication caps to make fearless forecasts for the coming year. What does the crystal ball say about big data?

We culled an assortment of 2015 predictions from big data executives and analysts. The overarching theme: Big data gets real next year, as does the Internet of Things. What do you think? Will these prophecies come true, or are they better suited for the Psychic Friends Network?

Prediction #1: Big data proves it’s more than just big hype.
“In 2014 the booming ecosystem around Hadoop was celebrated with a proliferation of applications, tools, and components. In 2015, the market will concentrate on the differences across platforms and the architecture required to integrate Hadoop into the data center and deliver business results.” — MapR CEO and cofounder John Schroeder.

Prediction #2: On a similar note, 2015 will be Hadoop’s “show me the money” year.
“Hadoop has been rapidly adopted as ‘the way’ to execute any go-forward data strategy. However, early adopters must now show return on investment, whether it’s migrating workloads from legacy systems or new data applications. Luckily, products and tools are evolving to keep pace with the trajectory of Hadoop.” — Gary Nakamura, CEO of Concurrent.

Prediction #3: Location services move indoors.
“Indoor location technology and services will rapidly gain traction. Where previously WiFi was the primary enabler to position a mobile device indoors, its inability to calculate elevation, coupled with errors introduced through signal noise, has meant that using WiFi alone indoors was frequently not accurate enough. However, with BLE (Bluetooth Low Energy) beacons now increasing in number, these can combine with WiFi access points while using the device-embedded MEMS (Micro-electro-mechanical-systems) sensors to provide accurate location indoors.” — Juniper Research’s “Top 10 Tech Trends for 2015” whitepaper.
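The fusion Juniper describes can be sketched as a toy estimate: convert each beacon's received signal strength (RSSI) to an approximate distance with a log-distance path-loss model, then take a weighted centroid of the beacon positions. The beacon layout, TX power, and path-loss exponent below are invented for illustration, not taken from the whitepaper.

```python
def rssi_to_distance(rssi_dbm, tx_power_dbm=-59, path_loss_exp=2.0):
    """Approximate distance (meters) from RSSI via a log-distance path-loss model."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

def weighted_centroid(beacons):
    """Estimate (x, y) as a centroid of beacon positions weighted by
    inverse estimated distance, so closer beacons count more."""
    x = y = total_w = 0.0
    for (bx, by), rssi in beacons:
        w = 1.0 / max(rssi_to_distance(rssi), 0.1)  # clamp to avoid division by zero
        x += w * bx
        y += w * by
        total_w += w
    return x / total_w, y / total_w

# Three fixed beacons at known (x, y) positions with observed RSSI readings.
readings = [((0.0, 0.0), -59), ((10.0, 0.0), -75), ((0.0, 10.0), -75)]
print(weighted_centroid(readings))  # estimate is pulled toward the strongest beacon at (0, 0)
```

In practice an indoor positioning engine would also fuse the MEMS sensor data the whitepaper mentions (step counts, barometric elevation) through something like a Kalman or particle filter; the centroid above is only the simplest starting point.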

Prediction #4: Connected cars might grab the headlines, but other IoT devices will prove a lot more useful.
“Autonomous vehicles such as drones and self-driving cars will dominate public perception of the IoT. Less-glamorous connected objects will make the greatest impact on people’s lives — many without them even knowing it.” — Brian Gilmore, an Internet of Things and industrial data expert at Splunk.

Prediction #5: You, too, can be a data scientist, no PhD required.
“As data becomes more accessible and analytic tools become easier to use and readily available, data science won’t be limited to those in the technology sector. In 2015, anyone with the right tools can draw powerful insights from data. We’re not blasting CS degrees, but in 2015 data scientists’ skillsets will be vastly different, especially as the ability to code will be less of a job requirement. Data scientists should take a page out of anthropology and understand that qualitative information can also provide answers to questions you didn’t know you had.” — Lukas Biewald, CEO and cofounder of CrowdFlower, a data-mining and crowdsourcing service.

Prediction #6: The Internet of Things will have a big impact on customer service, with consumers expecting more personalized interaction with vendors.
“The Internet of Things changes the entire customer service dynamic; rather than a limited number of customer communication channels, customer experience management (CEM) systems will be able to process live streams of data from fitness wearables, motor vehicles, home appliances, and medical instruments, to name only a few categories of connected devices on the horizon. When collected, correlated, and applied, the data from these devices will coalesce into an unprecedented view of the customer’s needs, resulting in far greater competitive advantage for those who are aware of the possibilities.” — Keith McFarlane, CTO and senior vice president of engineering at LiveOps, a cloud-based customer service provider.

15 APM Predictions for 2015

December 16, 2014
APM Digest

The annual list of Application Performance Management (APM) predictions is the most popular post on APMdigest, viewed by tens of thousands of people in the APM community around the world every year. Industry experts – from analysts and consultants to users and the top vendors – offer thoughtful, insightful, and sometimes controversial predictions on how APM will evolve and impact business in 2015.

Some of the predictions on this year’s list have never been seen on our previous APM Predictions lists. Some predictions foresee total upheaval in APM and related markets. Other predictions are continuations from previous years, maybe a little more predictable but no less important or disruptive.

Overall, the outlook remains strong for APM in general. “The APM sector will continue to thrive in 2015,” predicts Karun Subramanian, an application support consultant who started posting on APMdigest this year. “It is amazing how a number of businesses still have not implemented an APM solution.”

The list of predictions shows there is a lot of potential for APM in 2015, not just because many organizations still need to get onboard with APM, but because the technology is advancing so rapidly and expanding in so many directions.

Some predictions will be right on the money, and others may not come true. Many of the predictions overlap, just as concepts such as end user experience, mobile APM and DevOps all overlap in the real world. This list is not intended to be clear cut or definitive, but all of the predictions are interesting and make great reading that will get you thinking about all the exciting possibilities for next year.

The first 5 predictions are posted below. The next 5 will post tomorrow, and the final 5 will post on Thursday, followed by some more in-depth predictions from our bloggers in the following days.

Forecast by the top minds in Application Performance Management today, here are the 15 APM Predictions for 2015 – Part 1:


In a time when businesses are literally being re-coded by software, applications have now become the face of the business. In the age of rapid adoption and rapid rejection, enterprises have mere seconds to impress their users. In 2015, Application Performance Management solutions will not be just about the performance of applications or business transactions; their focus will move to helping enterprises inspire their users and deliver exceptional user experiences in order to earn their loyalty.
Anand Akela
Sr. Director and Head of APM Product Marketing, CA Technologies

Adoption of mobile workspaces to provide anywhere, anytime access to workforce apps will cross the chasm in 2015. As a result, organizations will need to validate expected gains in workforce productivity with a unified approach to End User Experience Management (EUEM) that covers mobile, virtual, and physical devices. While advanced analytics will play a critical role in providing insights into the impact of infrastructure performance in these converged environments, organizations focused on transforming their businesses into proactive enterprises will make EUEM the center of their monitoring strategy in order to effectively measure, manage, and improve workforce productivity.
Mike Marks
Chief Product Evangelist, Aternity

I predict that more enterprises will adopt a strategic, unified approach to application performance and user experience to improve employee productivity and engagement, and to build customer satisfaction and loyalty. Increasingly, C-level executives will recognize the link between assuring consistently superior user experiences and achieving strategic objectives and financial outperformance. This will make a unified performance analytics platform second only to database as the most strategic software in the enterprise. More vendors will have to evolve their offerings toward a framework approach.
Gabe Lowy
Technology Analyst and Founder of Tech-Tonics Advisors

Digital systems that deliver experiences and support digital commerce will become the systems of record, providing clear lines from business outcomes to user behaviors to user experience to delivery infrastructure. This likely means that APM systems will begin to adopt more customer experience and analytics capabilities to help drive contextual customer experiences and understand not just what happened, but which user experiences led to less-desired outcomes and how we can improve.
Ken Godskind
Chief Blogger and Analyst

The sophistication of inbuilt UEM/RUM capability will evolve in a number of dimensions, in particular object-level data and session-based metrics – optimizing business relevance.
Larry Haig
Senior Consultant, Intechnica


Growth of the Borderless Enterprise: Last year, “hybrid cloud” was the shiny new buzzword. Today, applications are also incorporating mobile technologies, social media, and Internet of Things (IoT) data into hybrid environments, while continuing to integrate to cloud, partner, provider, and customer application ecosystems. Viewed from the “end to end APM” perspective, the illusion of control has essentially vanished, yet the need for visibility to performance and availability remains. While 2014 has been a year of explosive change, it’s likely that 2015 will see APM vendors and their customers digesting these changes and adapting accordingly.
Julie Craig
Research Director, Application Management, Enterprise Management Associates (EMA)

In 2015, the ubiquitous nature of the cloud (especially SaaS application delivery), user mobility and wireless access will continue to usher in the age of the borderless enterprise. IT will encounter difficulties ensuring the end-user experience and efficiency of its workforce, obligating IT teams to re-think their performance management strategies in light of the expanded domain. Key considerations will include the ability to measure end-user experience regardless of location and establishing standard operating procedures for the implementation of new technologies and applications. IT teams will need to have full control of applications on the network, including the ability to evaluate service level agreements with SaaS providers. IT tool vendors will also need to reconcile the features they provide within the borderless enterprise by adjusting their APM and AANPM product line-up in order to fill this new IT visibility gap.
Bruce Kosbab
CTO, Fluke Networks


In 2014, we saw a significant rise in the adoption of APM as a concept. This year, we also saw a rise of various sub-domains within APM, including data analytics, mobile APM, and DevOps. 2015 will be about consolidating all those sub-domains and thereby meeting user expectations by bridging the gap between IT and digital groups within the organization.
Suvish Viswanathan
Manager, Product Marketing & Analyst Relations, ManageEngine

Today, traditional application and infrastructure monitoring, log and event analysis, user response time monitoring, byte-code type instrumentation and other tools are all important in helping IT understand what’s happening with their applications. However, each also only tells a portion of the story, meaning IT is left to try to piece together disparate data to get a holistic view of their application stack, which is no easy task. In 2015, IT will increasingly be able to see across more of these dimensions with an integrated view as vendors bring the capabilities each of these tools provide together for a much improved IT experience.
Michael Thompson
Director, Systems Management Product Marketing, SolarWinds

I predict that APM tools will start to incorporate bandwidth monitoring alongside application performance. As bandwidth-intensive applications become business-critical, root cause analysis becomes challenging without the ability to pinpoint the cause of a slowdown with confidence. Monitoring bandwidth and application performance simultaneously provides a more holistic understanding of the ecosystem’s health.
Megan Assarrane
Product Marketing Manager, Ipswitch
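The side-by-side monitoring Ipswitch describes can be illustrated with a toy correlation check between link utilization and application response time. All sample numbers and the 0.8 cutoff below are invented for illustration.

```python
def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Per-minute samples: link utilization (%) and app response time (ms).
utilization = [20, 35, 50, 65, 80, 90, 95]
response_ms = [110, 115, 130, 180, 340, 620, 900]

r = pearson_r(utilization, response_ms)
if r > 0.8:
    print(f"strong correlation (r={r:.2f}): bandwidth is a likely root cause")
```

A real APM tool would of course align the two series by timestamp and control for confounders, but even this naive check shows why collecting both metrics in one place shortens root cause analysis.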

APM platforms will have to evolve to support the hybrid enterprise, provide end-to-end insight into performance, and help IT achieve even faster Mean-Time-To-Resolution. As cloud and mobility become mainstream in 2015, APM has to enable IT to pinpoint issues across the entire stack – from the end user device, through the network, to the app or data tiers hosted in the datacenter or in AWS or Azure. In addition to the holy grail of end-to-end visibility, APM will be driven to deliver high-resolution data and analytics to enable faster diagnose-and-repair cycles.
Peco Karayanev
Sr. Product Manager, Riverbed


Mobile app APM will be the key focus in 2015 as mobile usage continues its growth, with the enterprise space now becoming significant. Closely tied to performance testing is security testing: mobile app security will also rise in importance in 2015.
Michael Azoff
Principal Analyst, Ovum

The market will continue to experience a rise in mobile APM. Today, we barely have any visibility into mobile app performance; it’s a black hole. As businesses see an increase in revenue through their mobile apps, the IT team’s imperative will be to provide a consistent user experience on all interfaces, both web and app. We will see mobile operation teams and Web app operation teams come together to ensure this unified experience while going beyond crash reports for apps.
Suvish Viswanathan
Manager, Product Marketing & Analyst Relations, ManageEngine

Shop Direct’s CEO commented this year that 50% of the company’s consumers viewed its site through a mobile device, but in 2015 100% of their customers will test content through their mobile app. 2015 will be do or die in terms of mobile. Mobile channels are exploding and businesses need to get the mobile experience right this year — not just one time, but on an ongoing basis with the right kind of APM tools. But Mobile APM as a standalone application will no longer exist in 2015. Businesses will realize the importance of reliance on backend infrastructure, which will necessitate end-to-end visibility across all of their applications from one comprehensive APM solution.
Maneesh Joshi
Sr. Director and Head of Product Marketing and Strategy, AppDynamics

We are in the midst of a massive mobile surge that is changing the way customers, employees or partners engage with businesses. Mobile is now a primary touch point and a driving force in the way millions of users bank, shop and transact. As a result of this shift, which will only continue to increase throughout 2015, organizations will need to focus on mobile application and web-delivered experiences. With this in mind, organizations must make optimizing performance, and capturing and analyzing all end user experiences, a top priority both to drive business success and to understand how users interact and engage across digital touch points.
Erwan Paccard
Director of Mobile Performance Strategy, Dynatrace

A “mobile first” mentality will become the focus in APM in 2015. As application developers realize the need to focus on developing apps that work across multiple devices, operating systems, and scenarios, they will look to monitoring from the end user perspective throughout the process. The developers want to create the optimal user experience regardless of the user’s scenario, device, or network, and will use continuous delivery to effectively manage coverage complexity. This shift will tighten the loop between Dev and Ops, with a refocus of APM solutions to adopt cloud-based real devices and provide insight into the end user experience.
Amir Rozenberg
Director of Product Management, Perfecto Mobile

In 2015, traditional industries that have the most to lose from providing low-quality mobile experiences will adopt mobile APM. At the top of the list will be retailers. These companies will follow best practices established by more nimble mobile-first and mobile-dominant firms, which in 2014 recognized that mobile client code needs to be watched.
Ofer Ronen

The trend for 2015 is clear: flexible work environments! Companies have tremendous interest in creating and empowering a mobile workforce to increase productivity and enhance employee work-life balance. The key to success is user adoption, and that requires reliable access, optimal app responsiveness, and a solution for monitoring end-to-end performance. Only through real-time performance data, intelligent historical analytics, and automatic correlation will IT pros have the universal insight and actionable intelligence they need to fine-tune their environment, proactively diagnose potential problems, prevent unscheduled downtime, and keep end-users happy and productive.
Srinivas Ramanathan
CEO, eG Innovations

From gleaning the product reviews on IT Central Station, I can share with you one of the top features that real users cite as lacking in current APM tools: mobile app monitoring. I predict vendors will address this critical need in 2015.
Russell Rothstein
Founder and CEO, IT Central Station


The advent of the “Internet of Things” (IoT) will elevate the importance of implementing powerful, easy-to-use and cost-effective APM solutions as a rapidly expanding universe of end-points are connected by software-enabled sensors and systems. The new generation of APM solutions will have to contend with an exponentially greater number of connections, transactions and data points. The APM solutions will also have to span Cloud and on-premise applications which will be linked together in the IoT environment. The task of implementing and administering the APM solutions will increasingly be performed by highly specialized, third-party service providers.
Jeffrey Kaplan
Managing Director of THINKstrategies and Founder of the Cloud Computing Showplace


APM as a monitoring entity is expanding with new sub-categories of technology that complement its demeanor. Some of these technologies will remain on the periphery; however, others will naturally become part of APM as the market is solidified. I foresee the advanced analytics and behavioral learning technologies being incorporated as product offerings from the most advanced APM solutions that are on the market today.
Larry Dragich
Director of Enterprise Application Services at the Auto Club Group and Founder of the APM Strategies Group on LinkedIn.

In 2015, analytics will continue to be a top APM feature. In 2014, we saw a number of APM solutions bringing out analytics features. This will continue in 2015 as APM increasingly becomes about delivering forward-looking insights through easy querying and presentation of application-centric data. In addition, in 2015, the primary reason to invest in APM solutions will not be to reduce MTTR. Since the dawn of monitoring and management solutions, their main benefit has been to reduce MTTR. This will change in 2015 as smart IT leaders realize that good APM and analytics solutions should prevent application issues from happening in the first place. Therefore in 2015, I expect that IT leaders will focus on APM and analytics solutions that can improve metrics such as mean time between failures (MTBF) rather than making MTTR reduction the base necessity.
John Rakowski
Analyst, Infrastructure and Operations, Forrester Research

Analytics will begin serving the needs of others, and the advantages of deep instrumentation will begin to show differences between products which have APM capabilities and those which do not. We will see advances in distributed network analysis, which were previously not handled by today’s cast of characters. Analytics will begin to advance beyond the search and presentation focused offerings of today.
Jonah Kowall
Research VP, IT Operations, Gartner

In 2015 analytics driven APM will mature. In 2014 we saw a growing trend of log analysis usage for better problem diagnosis. In 2015 this trend of search analytics will continue and will become more tightly integrated with APM. The other key area will be self-learning dynamic thresholds to predict problems beforehand rather than detecting them. Another space where analytics will become prominent is optimization of event noise through smarter event correlation. I also think analytics will evolve from detecting and predicting issues to prescribing recommendations and automated actions to resolve application performance problems.
Payal Chakravarty
Sr. Product Manager – APM, IBM
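The “self-learning dynamic thresholds” Chakravarty mentions can be sketched minimally as a rolling mean-plus-k-sigma detector: instead of a fixed static limit, the threshold is re-learned from a sliding window of recent samples. The window size and k below are illustrative assumptions.

```python
from collections import deque
from statistics import mean, stdev

def dynamic_threshold_alerts(samples, window=10, k=3.0):
    """Yield (index, value) for samples exceeding a threshold learned from
    the previous `window` samples (rolling mean + k standard deviations)."""
    history = deque(maxlen=window)
    for i, v in enumerate(samples):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if v > mu + k * sigma:
                yield i, v
        history.append(v)

# Steady response times (ms) with one spike at the end.
latencies = [100, 102, 99, 101, 100, 98, 103, 100, 101, 99, 100, 450]
print(list(dynamic_threshold_alerts(latencies)))  # → [(11, 450)]
```

The same static threshold (say, 400 ms) would miss a service whose baseline is 10 ms degrading to 300 ms; a learned threshold adapts to each metric's own normal range, which is why this idea keeps appearing in the predictions above.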

In 2015, real-time analytics will be a “need to have,” not a “nice to have,” for enterprises that compete based on the strength of their IT services. The companies that will thrive in today’s instant-access marketplaces are those that can identify problems early and begin resolving them before they have huge adverse effects on customers.
Kevin Conklin
VP of Marketing, Prelert

We love to measure everything in the monitoring world, and 2015 is going to be no different. New advances in technology and the expansion of mobile devices will mean even more data to be collected, and with new data will come an expansion of analytics. We’re already seeing new analysis capabilities in the form of CDF charts and historical comparisons, and I’m anticipating being able to drill down even deeper into data – and integrate it into existing metrics – in order to provide the best possible look at the impact of performance.
Mehdi Daoudi
CEO and Founder, Catchpoint


Big Data has become almost a mainstream word. But, analytics for Big Data, not as much. In 2015 we will start to see the walls between business and IT begin to crumble (or at least further crack) as the business’ needs to rapidly analyze large volumes of data for perishable insights becomes paramount. In order to accomplish this we will need applications that can rapidly stream data for real-time analysis. APM will be used to ensure these newly critical applications perform effectively. In this year we will see the need for real-time, big data analytics drive the importance of APM as the business and IT collaborate to make this work.
Charley Rich
VP Product Management and Marketing, Nastel Technologies

We have seen only the beginning of the M2M wave of big data associated with APM. It’s not only the volume that will go up, but the ways in which it flows. Systems must be able to cope with and normalize this multifaceted data in real time.
Vess Bakalov
Co-Founder and CTO, SevOne

I expect to see a new generation of IT operations analytics tools, based on blended analytics that can more proactively detect anomalies, predict outages, provide deep diagnostics and resolve issues within a real-time business context. By correlating various silo-sourced data (log, performance, configurations, security etc.), the next generation of IT Operations Analytics tools will be better positioned to sift through terabytes of operations data in real time, spotting and presenting issues to users in a more understandable context.
Sasha Gilenson
CEO, Evolven

From gleaning the product reviews on IT Central Station, I can share with you one of the top features that real users cite as lacking in current APM tools: deep analytics. I predict vendors will address this critical need in 2015.
Russell Rothstein
Founder and CEO, IT Central Station

Performance Management in Big Data will become a significant revenue and market capturing opportunity for major APM players in 2015.
Gary Nakamura
CEO, Concurrent


The gap between business analytics and IT analytics is quickly narrowing. In 2015, software analytics and business analytics will be viewed as one and the same, and as a critical piece of business intelligence by stakeholders on both sides of the equation.
Maneesh Joshi
Sr. Director and Head of Product Marketing and Strategy, AppDynamics

In 2015, digital marketing analytics solutions will collide with APM analytics features. As APM solutions push upwards into the business with the value they provide then I expect analytics features to encroach on features provided by digital marketing analytics solutions (e.g. Google Analytics). Successful APM solution providers will coalign with digital marketing solution providers through strategic partnerships to shift the focus from just application performance to digital service performance.
John Rakowski
Analyst, Infrastructure and Operations, Forrester Research


In 2015, APM will break out of the back office and onto the CXO’s desk as it transforms a vast array of disconnected service and infrastructure data points into an analytics dashboard that is accessible to both line-of-business and IT users. This APM analytics dashboard will become a strategic weapon that better supports the business by ensuring business applications are optimized, highly available, and accessible to anyone from anywhere.
Bill Berutti
President of the Performance and Availability Business at BMC Software

The focus on customer experience management will drive organizations to undertake end-to-end monitoring of all of their web, native mobile, mobile-enabled web, and API assets, using a single platform. These platforms will also necessarily evolve to become more “answer-centric” – with the ability to surface differing levels of actionable insight and pertinent detail to a diverse group of stakeholders: business owners, IT/Ops personnel, QA engineers, and developers.
Denis Goodwin
Director of Product Management, AlertSite by SmartBear

From a functional perspective, competitive pressure will drive an increased focus on accessibility, particularly from those vendors with high end APM solutions.
Larry Haig
Senior Consultant, Intechnica


The APM frenzy will start to expose shortcomings in terms of integrated insights into change management, capacity optimization and broader alignment with business values that will move the discussion closer to Business Service Management (BSM). This was my prediction last year — and I’ve already seen trends in this area, exacerbated by cloud and DevOps, among other things. Maybe 2015 will be the year when the industry finally takes a deep breath and recognizes the need for a new, more dynamic service-aware management system that’s truly cross-silo.
Dennis Drogseth
VP of Research, Enterprise Management Associates (EMA)

My biggest prediction for APM in 2015 is that it needs a name change. As digital experiences become the primary way brands engage with users, APM is moving up the business stack. I’ve dubbed this Unified Business Monitoring.
Ken Godskind
Chief Blogger and Analyst


In 2015, the focus on cloud monitoring will continue to rise.
Karun Subramanian
Application Support Expert

For all of the talk about cloud diversity, the majority of the users in the market have been slow to adopt a cloud-diverse development practice. The juggernaut cloud provider is still obviously AWS, which reportedly holds 17 times the market share of its next 5 competitors put together. And when we talk to our users, their applications in the cloud are primarily running on AWS. In 2015 we should see true cloud diversity take hold, forcing many APM providers, who have prioritized AWS in development, to get on board with universal coverage for cloud providers. Those who have taken a “cloud-first” approach to developing their APM solution will find this transition much easier than those who have to re-engineer or cobble together a mixture of legacy and next-gen solutions that can span multiple physical and virtual environments.
Josh Stephens
VP of Product Strategy, Idera

In 2015, we expect APM will increasingly be focused on cloud performance management (CPM), as applications become decoupled and components are distributed across public, private, and hybrid cloud environments. Increased visibility into log-level analytics will be critical to APM as access to code and application metrics becomes increasingly untenable.
Andrew Burton
CEO, Logentries
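The “log-level analytics” idea above – deriving performance metrics from logs when code-level instrumentation isn't available – might look like this toy sketch. The log format and endpoint names are invented for illustration, not any specific product's output.

```python
import re

LOG_LINES = [
    "2015-01-07T10:00:01 GET /api/orders 200 latency_ms=42",
    "2015-01-07T10:00:02 GET /api/orders 200 latency_ms=57",
    "2015-01-07T10:00:03 POST /api/cart 500 latency_ms=1890",
    "2015-01-07T10:00:04 GET /api/orders 200 latency_ms=38",
]

def percentile(values, pct):
    """Nearest-rank percentile over a list of numbers."""
    ordered = sorted(values)
    idx = max(0, int(round(pct / 100.0 * len(ordered))) - 1)
    return ordered[idx]

# Extract the latency field from each log line that carries one.
latencies = [int(m.group(1)) for line in LOG_LINES
             if (m := re.search(r"latency_ms=(\d+)", line))]
print("p95:", percentile(latencies, 95))  # dominated by the slow 500 response
```

In production this parsing would run continuously over a log stream, but the principle is the same: performance signals already sitting in logs can stand in for metrics the APM agent cannot collect.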

In 2015, APM will expand to cover the growing ecosystem of SaaS applications that increasingly power modern organizations. Traditional APM has covered apps such as web apps and on-premise data stores, but as businesses continue to move to the cloud, APM will have to cover the intersection of applications built by the business and applications bought by the business. Distributed applications communicating with each other is increasingly the fabric of modern businesses, and 2015 is the year that APM steps up to monitoring the entire ecosystem.
Dan Kuebrich
Product Director, Application Performance, AppNeta

There are a lot of apps being developed and hosted in the cloud, and those “producers” need to monitor and manage their own app performance, but what about the customers – the enterprise IT and business operations teams purchasing and consuming these cloud apps and services; e.g., Office365, Google Apps, Workday, DropBox, Expensify, etc.? There are a lot more of these SaaS app “consumers” than there are “producers” – and these consumers still own application performance management and still support users who expect them to maintain high application service levels regardless of where the app runs. It’s still Application Performance Management, but the requirements are fundamentally different, and it’s this emerging need that will disrupt and reshape the APM landscape the most in 2015.
Patrick Carey
VP Product Management and Marketing, Exoprise


In 2015, the migration to 10G networks and the increasing adoption of virtualization will intensify the pressure on APM vendors. The massive amount of data to be analyzed will challenge the industry to combine deep transaction analysis with full detail retention at L7 over millions of transactions. In addition, vendors not in a position to monitor transactions over virtual networks will be out of the game.
Managing Director, SecurActive Performance Vision

2015 will see the need for Application Performance Management and Application-Aware NPM (AA-NPM) production tools to comprehend two new domains: what is going on inside virtualized servers (virtual machines [VMs] and virtual switches); and visualizing virtual networks, especially around OpenStack. Both of these technologies can have a large impact on production application performance and quality of service. By providing visibility into how configuration changes in virtual servers and virtual networks relate to application performance changes, AA-NPM production tools will simplify the IT staff’s job of understanding unexpected events, as well as seeing whether changes in the underlying infrastructure produced the results that were expected.
Mike Heumann
VP, Product Marketing and Alliances, Emulex

Seismic shifts are taking place in the enterprise, and those shifts mean that application and infrastructure performance management (IPM) must adapt to new realities. There’s no stopping the rampant adoption of mobility, cloud, and hybrid cloud computing. The growth in virtualization of compute, storage, and networking remains unfettered. And adoption of web-scale computing is burgeoning, as is the DevOps organization. All of this means that the sophistication of performance management tools must evolve at hyper-speed. To keep pace, those tools will also have to be highly scalable. They must extend end-to-end, from the user to the backend infrastructure, to include clients, servers, network, and storage, as well as support virtualized workloads of all kinds, wherever they reside – whether web, e-commerce, or other business apps. Yes, that’s a tall order – but not optional. The APM solutions of 2015 must become infrastructure-aware, and the virtualization/IPM solutions must become application-aware!
S. “Sundi” Sundaresh
CEO, Xangati


In the next year, we’ll see companies spend more time and money looking at how they can optimize application performance from the ground up utilizing containerization technology, such as Docker. This trend will be evident across many application and technology types — from web applications to big data analysis engines.
Charlie Key
Founder of Modulus, a Progress company

As Docker continues to gain momentum in organizations adopting DevOps and cloud computing, APM will focus more on container-driven, microservices architectures in 2015. This shift away from monolithic to microservice applications will mean an even greater need for visibility into complex, distributed environments. As a result, APM will evolve to provide even richer data coupled with more powerful analytic capabilities.
Christine Sotelo
Product Marketing Manager, New Relic

Virtualization of servers, networks, and the abstraction of the entire resource infrastructure will challenge APM solutions to maintain operational visibility, reduce troubleshooting time and offer insight into how to optimize IT services. Our prediction for 2015 is that enterprises will ramp their orchestration efforts to achieve enhanced service delivery performance and business efficiencies. Service orchestration will enhance agility to incorporate dynamic application rollouts and the capability to deploy hybrid infrastructure architectures.
Brad Reinboldt
Sr. Product Manager, Network Instruments/JDSU

In 2015, container virtualization will be the #1 solution for unlocking the promise of big apps, as containerization moves beyond just Linux (i.e., Docker) into the Windows world. Once there, Windows-based containerization will provide its users with a number of important benefits: the ability to dramatically increase application performance and mobility; simplified day-to-day management tasks, such as patch management and asset utilization optimization; high availability (HA) and the operational integrity of the business; and, consequently, significant economic benefit across the entire enterprise.
Don Boxley
CEO and Co-Founder, DH2i

A big factor in the upcoming year will be the growth of SDN. As the infrastructure becomes application aware we will see a lot of value being derived from understanding and correlating the performance of applications with the underlying virtualization, server and network infrastructure. Data is only as good as the decisions it allows us to make, and with the flexibility inherent in SDN, we have a lot more options in how we scale and deliver our applications. In order to do effective APM, we must have a holistic view across the whole delivery stack.
Vess Bakalov
Co-Founder and CTO, SevOne


2015 will bring about a great divide within APM and its subcategories. While code-level APM will continue to increase adoption inside application development, a newer category described by leading analysts as Application Operations Management (AppOps) and Application-Aware Infrastructure Performance Monitoring (AA-IPM) will emerge due to the growing demand for visibility from those responsible for the shared infrastructure across the enterprise.
David Roth
CEO and Co-Founder, AppFirst

2015 will mark a significant shift in the way that APM tools are used by IT Operations teams. Driven by increased implementation of hybrid-cloud-based and massively distributed applications, these teams will stop using APM tools as their go-to primary tool, opting for unified infrastructure/application monitoring solutions instead. APM tools will move into the role of an integral code-debugging solution for developer-intensive DevOps processes.
Vic Nyman
COO and Co-Founder, BlueStripe


APM integration into the entire software development life cycle will be the standard for enterprises that want to stay agile. Development, testing and monitoring will be integrated at the core so that all three of these processes and supporting systems work seamlessly together. In 2015, the software industry will come to understand the benefits of development and operations working closely together as the DevOps movement continues to take hold. Testing will be an integral part of the mix, so that all APM solutions will integrate with Continuous Integration, Continuous Delivery and Continuous Testing solutions.
Alon Girmonsky
Founder and CEO, BlazeMeter

In 2015, APM tools will evolve to enable a better DevOps culture. Integration of APM tools with deployment tools, visualization of pre-deploy and post-deploy performance patterns, and automated actions on deployment in response to performance degradation will be key enhancements in APM tools to aid the DevOps culture. Code-level diagnostics in development as well as production environments in the enterprise will become commonplace. Collaborative problem solving using virtual war rooms will also gain ground, helping Developers, Operations and other parties work smoothly through problem diagnosis.
Payal Chakravarty
Sr. Product Manager – APM, IBM

2015 will be the year that the DevOps tool conversation expands beyond its current (almost singular) focus on configuration automation tools like Puppet and Chef to embrace the fact that collaboration across teams and tools is equally critical to DevOps transformation. As enterprise DevOps efforts expand beyond pilot projects with teams located in the same physical office, organizations will find that SharePoints, emails, conference calls, and instant messaging don’t scale and aren’t effective at aligning distributed teams and tools to support the flow that DevOps is intended to enable. Collaboration capabilities will be increasingly added to existing DevOps-oriented software products. Solutions that enable collaboration across development, project management, and IT operations tools and teams will be sought out and adopted by the many organizations that will struggle with the “uber change” and “uber collaboration” imperative that DevOps represents.
Matthew Selheimer
SVP of Marketing, ITinvolve

Dev teams are finding ITOA invaluable to quickly determine whether problems are due to their code or to something else, e.g. the cloud infrastructure. In 2015, ITOA is predicted to become even more correlative: not only correlating all performance and availability data across the IT stack, but relating it to change management data (e.g. from automated code deployment and release management tools) as well. This is going to be critically useful, as the large majority of performance and availability issues are caused by changes. This added correlation will also drive the increased deployment of ITOA tools into the pre-production/QA stage so they can find potential problems earlier. It is predicted that this “merging” of ITOA tools across pre-production/QA and production uses will become more common.
Phil Tee
Chairman, CEO and Co-Founder, Moogsoft

High profile application performance issues in 2014 drove a scramble to understand application performance and institute discipline around DevOps. In 2015 these disciplines are going to become an ante: they will be part of every major project’s stage gate for release. This doesn’t mean that we will settle on standards or that all rollouts will take equal advantage of the tools available, but CIOs and business teams will insist on having performance metrics as part of the go/no-go decision matrix. This is the start of moving the basis of the IT conversation away from availability and towards performance, which ultimately will lead to better results for our customers.
Mark Swanholm
Chief Strategy Officer, Performance Tuning Corporation


Banks will upgrade their APM capability in response to an increasing focus on application availability by financial regulators. The provision of online banking has long since moved from a nice-to-have to a service level expectation. In Europe there have been fines in 2014 for banking application down time and in other parts of the world expectations for application availability are being set in regulatory stone, for example, the new MAS TRM guidelines (Monetary Authority of Singapore Technology Risk Management).
Bob Tarzey
Analyst and Director, Quocirca

Big Data: Concurrent CEO Suggests a Pragmatic View for Data Projects in 2015

Dick Weisinger

Where will the Big Data industry be going in 2015? Gary Nakamura, CEO of Concurrent, has published his list of predictions about the direction that Big Data will take. This is the second year in a row that Nakamura has offered up predictions.

Nakamura said that “this year every company is in the business of data, and this will drive the demand for cost effective and scalable Big Data platforms higher than ever before. As the market continues to catch up to the hype, 2015 will be the year that Hadoop becomes a worldwide phenomenon. As part of this, expect to see more Hadoop-related acquisitions, IPOs and the rise of new jobs.”

Nakamura’s predictions made a year ago today for Big Data in 2014 were as follows:

  1. Expect to see more funding of Big Data companies and potentially a significant IPO
  2. More Hadoop projects will fail than will succeed
  3. Big Data Projects will be an increasingly important part of business processes, leading to a need for Big Data project managers
  4. Big Data will become more about the apps that use big data than the data itself
  5. Big Data will be everywhere, but will continue to be convoluted and confusing.

This year, Nakamura again has predictions. Going into 2015, he expects to see the following trends:

  1. Given the number of Big Data failures in 2014, companies will be more pragmatic in matching Big Data to the right problems
  2. Because people have become familiar with it, MapReduce will continue to see wider use than other Big Data options, even newer ones like Apache Spark and Tez
  3. Increasingly, Java Enterprise developers will see their skillset in high demand to work on data projects
  4. Hadoop adopters will be looking closely in 2015 to measure their success and return on investment of their projects
  5. “Elephants will fly” — Hadoop will make a push to become a worldwide phenomenon

Stripe Open Sources Tools For Apache Hadoop

December 9, 2014
Alex Giamas

Stripe, the internet payments infrastructure company, recently announced it has open sourced a set of internally developed tools built on Apache Hadoop.

Timberlake is a dashboard for Hadoop jobs. Written in Go with a React.js frontend, it improves on existing Hadoop job trackers. By providing waterfall and boxplot visualizations for jobs, it makes it easier to figure out what makes a MapReduce job slow. Timberlake plays well with Scalding and Cascading and can visualize their flows. Timberlake works only with the YARN Resource Manager API and has been tested on v2.4.x and v2.5.x.
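The boxplot idea is general: given per-task durations, simple quartile statistics surface the straggler tasks that dominate a job's runtime. A minimal sketch in plain Python (the durations below are hypothetical; Timberlake itself derives real task data from the YARN Resource Manager API):

```python
import statistics

# Hypothetical map-task durations (seconds) for one MapReduce job.
durations = [12, 14, 13, 15, 14, 13, 16, 15, 14, 95]  # one straggler

q1, median, q3 = statistics.quantiles(durations, n=4)
iqr = q3 - q1

# Classic boxplot rule: points beyond Q3 + 1.5*IQR are outliers --
# here, the straggler tasks that stretch out the whole job.
stragglers = [d for d in durations if d > q3 + 1.5 * iqr]
print(stragglers)  # [95]
```

A dashboard plotting these distributions per job phase makes a lone 95-second task stand out immediately against a cluster of roughly 14-second ones.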

Brushfire is a framework for distributed supervised learning of decision tree ensemble models in Scala. Based on Google’s PLANET, it is built on top of Hadoop and Scalding. Brushfire can run classification tree learning algorithms in a scalable way using commodity hardware, and can build and validate random forests from large training data sets.

Sequins is a dead-simple static database. It indexes and serves SequenceFiles over HTTP, so it’s perfect for serving data created with Hadoop. It’s a simple way to provide low-latency access to key/value entries generated by Hadoop jobs.

Finally, Herringbone is a suite of tools for working with Parquet files on HDFS, and with Cloudera Impala and Apache Hive. Stripe makes extensive use of Apache Parquet for efficient columnar storage and uses Cloudera Impala with Parquet; Herringbone is essentially a set of command line tools for more productive development.

With Apache Hadoop 2.6 just released and several big technology companies either contributing to Hadoop development or open sourcing tools from their internal stacks, the future looks bright for Apache Hadoop.

Banishing the Confusion of Eight Big Data Myths

December 9, 2014
Chris Preimesberger

Enterprises of all types and sizes are realizing that data sets being stored or archived in silos or in clouds—information they might have considered too old or irrelevant, or kept only for regulatory purposes—may have great potential value. It’s all about looking at a business’ history, making cogent queries, discovering insights and projecting what is likely to happen in the future, in order to become more customer-centric and inventory-effective. These companies are going into the internal business of analyzing data. As a result, organizations are in search of the necessary tools and information to take full advantage of the potential this movement offers. However, big data brings big hype, and big hype only brings big confusion about what’s what in the data market. In this slide show, eWEEK and Gary Nakamura, CEO of data application infrastructure provider Concurrent, discuss—and dismiss—the biggest myths that are disrupting the big data industry. Some of what turns out to be a myth may surprise you.

Myth 1: We Must Hire a Hadoop Expert

Hadoop is built on intricate concepts and components such as MapReduce, YARN, Spark and the Hadoop Distributed File System (HDFS), and the constant change and announcements of subsystem-level technology further convolute the picture. However, plenty of products and tools reduce this complexity and shield users from it entirely. There are open-source application frameworks and commercial products that significantly improve productivity and accessibility when working with Hadoop, to the point where companies can use internal resources to execute on their big data strategy: enterprise Java developers, data warehouse developers and data analysts can all quickly and easily leverage Hadoop.
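Those abstractions hide real plumbing, but the core MapReduce pattern they generate is simple. A hedged sketch in plain Python (not Cascading's actual API) of the map and reduce phases behind a word count:

```python
from collections import defaultdict

def mapper(line):
    # Map phase: emit a (word, 1) pair for every word in a line of input.
    for word in line.lower().split():
        yield word, 1

def reducer(pairs):
    # Reduce phase: sum the counts for each key after the shuffle.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

lines = ["hadoop makes big data simple", "big data big hype"]
pairs = [kv for line in lines for kv in mapper(line)]
counts = reducer(pairs)
print(counts["big"], counts["data"])  # 3 2
```

Frameworks such as Cascading let developers express this flow at a higher level while the platform handles the distributed shuffle, scheduling and fault tolerance that the sketch omits.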

Myth 2: Buying a Big Data Solution Means I’m Using Big Data

You’ve just convinced your organization to adopt a big data strategy, and you’ve purchased a solution. What’s next? Enterprises often get stuck at a point where they have the hardware and Hadoop software in place but don’t have the skill set to take advantage of it. Using big data means that you are using your data, executing a data strategy and helping your business with cost savings, revenue opportunities or additional insights. The key is lowering the bar for your organization to execute and deliver data products as quickly as possible. Delivering and running these production applications reliably and on time is the next set of challenges. When you achieve this level, you will know because your users will want more.

Myth 3: Big Data Is a Fad That Will Go Away in a Few Years

Ninety percent of the world’s data was created in the last three years. Sticking your head in the sand and hoping that it will go away is a career-ending move. We may drop the “big” in big data in a few years, but whether you like it or not, your company will be in the business of data.

Myth 4: Businesses Need One Data Scientist for All Big Data Needs

For too long, businesses have been upholding the myth of the data science hero—the virtuoso who slays dragons and emerges with a treasure of an amazing app based on insights from big data. The truth is they can’t afford to rely on a single data scientist or developer because employees can leave an organization at any time. By building a “big data app factory” of processes and teams, companies can ensure that great work can be done over and over again—regardless of personnel changes.

Myth 5: Traditional Enterprise Data Warehouses Will Go Away

It’s unlikely that the technology of the past will completely go away. Enterprises will continue to rely on traditional enterprise data warehouses (EDWs). However, with the rapid evolution of Hadoop and accompanying products and technologies, the role of the EDW in the enterprise will significantly diminish. The flow of data will change, and it’s likely that Hadoop will be its first stop.

Myth 6: Apache Spark Is the Future of Hadoop

As usual, the new, sexy young object is always the most alluring. Apache Spark is currently one of those: It is a fast and general engine for large-scale, clustered data processing. However, rest assured, another will come along and take its place as the hottest thing on the market. What people often forget is that old reliable is old and reliable for a reason, as it usually has the breadth and depth needed to move your big data project forward. Resist the urge to move to the latest; if it ain’t broke, don’t fix it. Stick with what you know.

Myth 7: Big Data Is Only for the Largest of Enterprises

The “big” in big data is misleading. Everyone—including organizations large and small—is in the business of data. Sure, large enterprises collect massive amounts of data, but the abundance of data that small enterprises can collect and leverage for competitive advantage also can be immense. Just because your data may be small in volume does not mean you shouldn’t have a data strategy in place.

Myth 8: Big Data Is for Hadoop Experts

Enterprises today are rapidly adopting Hadoop to process, manage and make sense of growing volumes of data, and enterprises are now leveraging existing internal resources to drive their data strategies forward. There are now mature, reliable tools readily available for all software engineers to use to unlock the full potential of big data and Hadoop. As a result, no Hadoop expertise is required.

NoSQL Databases: Niche Tools Slowly Taking Their Place in the Enterprise

December 8, 2014
Dick Weisinger

NoSQL databases store and retrieve data using techniques other than the structured tabular format of traditional relational databases. Compared to relational databases, NoSQL databases are often simpler in design, can scale more flexibly, and enable very fine-grained access to data.

While NoSQL databases dominate database technology news, their actual presence across enterprises is very low. A 2014 InformationWeek survey of the database technologies used by enterprises found that only 13 percent of organizations surveyed have installed Hadoop, 5 percent use MongoDB and 3 percent have SAP HANA licenses. Compare that to 75 percent of organizations using Microsoft SQL Server and 47 percent using Oracle RDBMS. Joe Masters Emison, CTO of BuildFax, comments that even FileMaker still has a higher adoption percentage than the highly hyped NoSQL contenders Cassandra, Riak, and MariaDB.

Forrester Research estimates NoSQL adoption at enterprises to be significantly higher, closer to 20 percent of enterprises. Forrester also expects the use of NoSQL to double by 2017.

Forrester identifies four use cases where NoSQL shines:

  • Operational databases for real-time and predictive analytics
  • Stream processing that scales across many nodes in a clustered configuration
  • Databases with low-latency ad hoc queries
  • Applications that require large volumes of rapidly growing structured and unstructured data

Gary Nakamura, CEO of Concurrent, said that “sometimes SQL is overkill for what you are trying to do. You don’t need a query language, just a key with a particular value, and that could be a lot simpler and faster.”
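Nakamura's point can be made concrete with a small, illustrative comparison (the table and key names are hypothetical; Python's stdlib sqlite3 stands in for a relational store):

```python
import sqlite3

# SQL route: a schema, a query language, a parser and planner --
# heavyweight machinery when all you want is one value for one key.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id TEXT PRIMARY KEY, name TEXT)")
db.execute("INSERT INTO users VALUES ('u42', 'Ada')")
(name,) = db.execute("SELECT name FROM users WHERE id = 'u42'").fetchone()

# Key/value route: the key *is* the access path; nothing to parse or plan.
store = {"u42": "Ada"}
print(store["u42"] == name)  # True
```

Both routes return the same value; the key/value lookup simply skips the query layer, which is Nakamura's "simpler and faster" case.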

Alex Miller, Developer at Cognitect, said that NoSQL “models let the developer start working with their data without committing to anywhere near as much up-front structure as a relational database. On initial usage, that makes them feel much lighter-weight and agile from a developer point of view. The important thing is to consider your data use case and find the most appropriate technology to match it.”

Even With 20 Years in Tech I Learned These Lessons as a First-Time CEO

December 8, 2014
Gary Nakamura

After working as an executive in the high tech industry for more than 20 years, I’ve learned my fair share of lessons along the way. Now, after completing my first year as a first-time CEO, at a startup no less, it’s become clear that growing, managing and running a company is no different.

However, like the old adage says, “you live and you learn.” That has certainly been a constant throughout the trials and tribulations of my first year at Concurrent, Inc. As I look back, here are the five most important lessons I’ve learned as a first-time CEO:

1. Check your ego at the door.

As CEO, I am focused on building a company around the vision of our founder and the talent of the people we put around him. This means I need to keep my ego in check every day when I walk into the office so that the focus stays on “we the team” rather than on “me the individual”.

Related: 8 Ways Rookie CEOs Can Succeed Faster

2. Planning ahead is your job.

In my spare time, I like to garden, and I often think of a company like a garden. Sometimes the seeds you plant won’t bloom until much later, but the effort put forth is still worth it. On the flip side, plants often die because you did something wrong, such as too little or too much watering, detrimental plant placement, and so on, but you learn from your mistakes and move on.

The same can be said when an initiative or project fails at your company, and – trust me – this will happen. Great CEOs, like gardeners, seek knowledge, plan ahead, are patient, are disciplined and are persistent. As a CEO, you need to think long term and big picture.

3. I am human – therefore, I “shave the peak.”

The job demands of a CEO, at times, require superhero abilities, and with the job demands, come the pressures of the job. But it’s your responsibility to keep these situations and stress in check. Remember that we’re still human.

There are always a million things that can keep you up at night, but to survive in the long run, you need to counter that stress and anxiety by taking care of yourself. I call that “shaving the peak.” I achieve this through daily exercise, but there are many other ways to clear your head and unwind. Find your way and stick with it.

Related: Mindfulness and the Startup CEO

4. Don’t be penny wise and pound foolish.

Startup life is very different from life at a big company (read: big budget). You have to keep your spend in check. However, being penny wise brings the danger of focusing on the wrong things and failing to spend where it really matters.

Some things are definitely worth the extra expense if they make your product better, your customers happier, and your employees feel more appreciated and more productive. Only a fool will cut back on something to save in the near term rather than keep the big picture in sight.

5. Don’t babysit.

At any company, there’s a mixed bag of personalities, plenty of opinions and times when team members at every level don’t see eye to eye. As a CEO, it’s important to know when to step back and ensure the team works together to figure things out, rather than hold everyone’s hands as they sort through their differences.

Every CEO is different, and so are the failures and successes from which they learn. However, whether you’re just starting out or well seasoned in the position, there are new opportunities from which to grow and learn every day. Being a (first-time) CEO is hard work and challenging, but with a level head on your shoulders, a good understanding of what is important (and what is not), and a strong and supportive team on your side, it is an exciting and fulfilling ride.

Concurrent Releases New Version of Big Data Application Performance-Monitoring and Management System

November 18, 2014

Hadoop is still a young technology, and working with it can be difficult for enterprise organizations. To help alleviate these challenges, Concurrent has announced the latest version of Driven, a big data application performance-monitoring and management system.

Driven is purpose-built to address the challenges of enterprise application development and deployment for business-critical data applications, delivering control and performance management for enterprises seeking to achieve operational excellence.

As a company, Concurrent is mainly focused on app development and app management. Concurrent defines apps as business IP that comes together with data to create a competitive advantage for how a business runs. Concurrent was initially founded around an open source framework called Cascading, whose purpose is to make it easier for users to build data-oriented applications on top of Hadoop. Cascading has performed very well, to the tune of 285,000 downloads per month.

Driven was spawned out of the need to manage the many applications that users created with Cascading on top of Hadoop.

When business users create these applications, many of the platforms for Hadoop do not offer performance management of the applications as they run. Driven is the tool that business users can apply to their applications to view their performance as they run. According to Concurrent, these data pipelines can be thought of as supply chains; the output is the data product that an analyst or data scientist consumes at the end. “Driven is focused around the most minute detail. I can tune it at a very fine-grain level, but I can zoom back out and look at it from a specific application level,” explained Chris Wensel, founder and CTO of Concurrent.

The newest version of Driven includes deeper visualization into data apps, a metadata repository, and new segmentation techniques. The deeper visualization allows users to debug, manage, monitor and search applications more effectively and in real time. The metadata repository is scalable, searchable, and fine-grained, and easily captures end-to-end visibility of data applications. With a complete history of an application’s telemetry, users can view its performance since inception. New segmentation support provides greater insight across all applications: users can segment applications by tags, names, teams or organization, and easily track general Hadoop utilization, SLA management or internal/external chargeback.

New products of the week

November 17, 2014
Ryan Francis

Visit the link above to see the full slideshow. Driven is featured on the fourth slide.

Key features: The latest release of Driven offers enterprises unprecedented visibility into their data applications, providing deep insights, search, segmentation and visualizations for SLA management while collecting rich operational metadata in a scalable repository.

Cascading Backer Boosts Hadoop App Performance Management

Doug Henschen
November 6th, 2014

With Hadoop quickly emerging as an applications platform as well as a big data-processing environment, Concurrent is broadening its Driven application performance-management system to monitor and manage a variety of data-centric applications.

Concurrent is the commercial vendor behind open source Cascading, arguably the most popular big data application-development option going — after native coding on separate platforms. Driven is Concurrent’s commercial product, but it’s not a souped-up version of Cascading. Rather, Driven is a separate big data application performance-monitoring and management system.

Where Hadoop vendors and analytics platforms like Apache Spark have their own management consoles that look at the health and performance of their clusters, Driven monitors and helps troubleshoot the performance of data-driven applications across multiple platforms and environments. That could be various Hadoop distributions or emerging systems like Spark, Storm, Tez, or other analytic platforms.

“Those other management consoles focus on the data fabrics where Driven focuses on the applications,” said Chris Wensel, founder and CTO of Concurrent, in a phone interview with InformationWeek. “We bring visibility to the version, the developer, and the process owner, and we help you understand what the application does, what libraries it depends upon, and most importantly, how it interacts with upstream and downstream applications.”

Where other management systems might help with post-mortem analysis, Wensel said Driven lets developers, operations, and line-of-business staff visualize myriad apps running on clusters and measure growth in demand, by app and by business unit, over time. And when applications fail, Driven is designed to surface how that will impact other applications so specific jobs can be killed or rerun and users or customers can be notified if there will be disruptions.

The first release of Driven, which came out in June, supported monitoring and management of Cascading, Scalding, and Cascalog applications, but with this week’s 1.1 update, Concurrent is adding support for Hive and bespoke MapReduce applications. Despite the emergence of multiple SQL-on-Hadoop options and MapReduce alternatives, these two options are still doing the bulk of the heavy lifting in Hadoop environments.

“Everybody wants to get to the next thing that will be faster than MapReduce, but they probably won’t go there for another two years because MapReduce works, they understand it, and they know the operational risks,” said Gary Nakamura, Concurrent’s CEO.

The combination of Cascading and Driven will let big data practitioners keep applications running and well managed, yet requirements for change to those apps will be minimal if they end up switching from MapReduce to alternatives like Spark or Tez, Nakamura said.

Other upgrades in Driven 1.1 include deeper visualizations for monitoring, managing, and debugging applications; search capabilities designed to quickly spot problematic applications; timeline visualizations to track app utilization trends; and app-segmentation support by tags, names, teams, or organizations so teams can track service-level agreement compliance and Hadoop utilization for internal or external chargebacks.

Concurrent backs the open source Cascading big data application development platform. Driven, pictured above, is its commercial app performance-management system.

Driven has been generally available for only four months, so Wensel said it’s no surprise there are fewer than a dozen customers at this point. The only publicly identified Driven customer is the Dutch email advertising optimization vendor Mojn.

“With Driven, our developers have unmatched operational visibility and control across all Cascading applications — including real-time monitoring, history and performance tracking over time,” said Johannes Alkjær, lead architect at Mojn, in a statement from Concurrent. “Driven [lets us] drive differentiation through our data and manage our data applications more efficiently.”

Concurrent is counting on the popularity of Cascading to drive interest in Driven. There are more than 8,000 production deployments of Cascading (including uses at Twitter, United Healthcare, Etsy, and Nokia), and the software is getting more than 285,000 downloads per month, according to Concurrent.

Cascading owes its popularity to the fact that it abstracts developers from the complexities of Hadoop programming so they can write once and deploy across multiple distributions and generations of distributions. Concurrent does the work making sure its platform stays up to date and compatible with multiple big data platforms as they evolve.

Cascading has been certified to work on multiple distributions and works with the YARN resource management framework. Concurrent also offers beta Cascading software and is preparing future production releases that will add support for Spark, Storm, and Tez as they become generally available.