Software is Changing
Because software defines the leading edge of what we can do with technology, it has always been a domain of innovation. But until fairly recently, there was a general sense, even among technologists, that profits were still mostly driven by the long tail of what’s possible in the material world.
Now, as Marc Andreessen, co-founder of Netscape, famously wrote in a 2011 Wall Street Journal essay, “software is eating the world.” There’s a new and increasingly widespread understanding that software drives everything, and a concomitant sense that much material apparatus and process – the mobile phones, the taxis-on-demand, the same-day delivery – is just packaging for software, which creates most of the value. In that essay, Andreessen noted that savvy legacy businesses were rapidly trying to climb the value ladder and redefine themselves as software companies. They had to, he explained, because otherwise they’d be disrupted by fast-moving, software-based competitors like Amazon.
Hype? Sure, some. But also, as the last half-decade (and especially the last two years) has abundantly proven, pretty much spot-on. Software really is “eating the world.” And as a result, software itself is changing.
These changes are happening everywhere, all at once, lured by the promise of new technologies and the business models and profits they potentially enable. The ubiquitous web, mobile apps, the Internet of Things (IoT), high-bandwidth streaming, Big Data, machine learning and other contemporary computing paradigms make novel and dynamic demands on compute and network resources quite unlike those made by previous generations of applications.
In response, the underlying substrates where software lives are rapidly evolving away from manually-configured hardware aggregations towards software-configurable, automation-driven, and autonomously-responsive (self-scaling, self-healing) cloud infrastructures – infrastructures with their own on-demand economics, presenting new opportunities for cost optimization and arbitrage.
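To make “software-configurable, autonomously-responsive” concrete, here is a minimal, purely illustrative sketch of the control loop behind a self-scaling service. The `Cluster` class and the CPU thresholds are invented for this example – real clouds expose this behavior through managed autoscaling services rather than hand-rolled loops:

```python
# Illustrative only: a toy reconciliation loop for self-scaling capacity.
# The Cluster class and thresholds are hypothetical, not any real cloud API.

class Cluster:
    """Toy stand-in for a pool of identical service instances."""
    def __init__(self, instances=2):
        self.instances = instances

    def scale_to(self, n):
        self.instances = max(1, n)  # never scale below one instance


def autoscale_step(cluster, avg_cpu, scale_up_at=0.75, scale_down_at=0.25):
    """One reconciliation step: compare observed load against thresholds
    and adjust the desired instance count accordingly."""
    if avg_cpu > scale_up_at:
        cluster.scale_to(cluster.instances + 1)   # add capacity under load
    elif avg_cpu < scale_down_at and cluster.instances > 1:
        cluster.scale_to(cluster.instances - 1)   # shed idle capacity
    return cluster.instances


cluster = Cluster(instances=2)
print(autoscale_step(cluster, avg_cpu=0.90))  # high load: grows to 3
print(autoscale_step(cluster, avg_cpu=0.10))  # low load: shrinks back to 2
```

The point is not the ten lines of Python but the inversion they represent: capacity decisions that once required a purchase order and a rack visit become a policy evaluated continuously by software.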
New programming languages, toolchains, application paradigms, and software engineering, automation, and monitoring best practices are emerging to solve domain-specific problems and exploit the opportunities presented by these new kinds of applications and cloudy substrates.
The bottom line is economic. If software is what produces value, we need to produce better software, faster. If software is what attracts investment – or, within organizations, obliges expenditure – then we need to rearrange things (procedural and technical) to limit avoidable commitments (shifting CapEx to OpEx) and ensure more predictable, continuous returns. Exhaustively planned, half-year or full-year release cycles need to be replaced by many shorter, less disruptive sprints that bring valuable features to users quickly and incorporate user feedback in prioritizing next steps. IT – formerly a cost center whose practitioners were preoccupied with maintaining physical infrastructure – needs to become DevOps: a fusion discipline that uses automation (more software) to accelerate software delivery and make software more reliable on public and private cloud infrastructure. And infrastructure monitoring, long focused on flagging failing system health, needs to become proactive – pairing with application performance monitoring (APM) to help DevOps teams document, explore, and optimize business service availability.