You'll routinely witness the ‘everyone wants to be the architect’ behavior, especially in large multi-disciplinary initiatives such as data center modernization, cloud architecture, or virtual infrastructure deployments.
Admittedly, architects carry a lot of weight. Driving the technical architecture while keeping the business on board is no job for slackers, and it has a lot of perks. Yet one bad architect, too many architects, or pseudo-architects (the worst) can drive a lot of great ideas straight into the ground.
So how do you tell the difference? Here are a few telltale traits:
The last time I was genuinely excited about ‘free’ was when Google sent me 100 bucks of free advertising on AdWords. The time before that was when EMC hooked me up with a nice little USB stick at the EBC. Thanks again Joe, I still use it constantly. Free gadgets are one thing (more!), but free services are another.
Through the course of the global financial crisis, there's been a rapid shift in the IT infrastructure consulting market toward ‘free consulting’. The loss-leader approach to winning business has been around forever, but the more the ‘free consulting practice’ proliferates in IT infrastructure, the more you have to wonder what is really being provided. Various books and management articles expound the brilliance of free, and possibly the absolute requirement of free to compete in our new global supersonic economy.
It makes sense if the bench is warm and the pipeline is light, yet you don't want to see 10 years of talent vanish to P&L pressures for short-term gains and long-term painful rebuilding. Hardware vendors have a magic pool of ‘marketing development funds’, applied feudal-style to strong upside product-win opportunities (competitive accounts, account salvage ops, etc.). But a pure consultancy might have bench cycles to apply, or might take an opportunity loss to deliver ‘free’ services.
A couple of challenges with free consulting services:
- Free means no budgeting approval, therefore no real effort on the part of the buyer
- Free results in little to no emotional commitment from the buyer or the business
- Free diminishes stakeholder involvement (often related directly to issues that need the most attention)
- This sets an ugly stage, since the underlings then have zero interest in playing ball
- Free devalues the work being done and casts a cheap light on the deliverables, no matter how well crafted
Any way you cut it, Free Equals Nothing in the services industry. You must have skin in the game at some point, some time, somewhere, with somebody in the food chain. Otherwise, free is nothing more than a LOST leader, a waste of time and talent.
Like many good things in the US, cloud computing originated on the West Coast inside the innovative web giants Google and Amazon. Both offer forms of storage and application services. Google today aims at the consumer and end-user with its ever-expanding office-type applications (calendar, documents, mail, etc.). For many organizations (such as the City of Los Angeles), Google replaces the traditional Microsoft desktop application suite. Amazon is geared more toward advanced consumers (like web developers) and the enterprise, with a range of increasingly data-center-friendly storage and compute services.
An ever-increasing list of cloud-storage-focused startups is on the scene, and now the major vendors are making their entries to market, slowly but surely. Don't forget the telecoms like AT&T, which, like EMC, model their offerings closely on the Amazon cloud.
The public cloud model threatens the premise of the traditional enterprise hardware and software industry, so we are beginning to see the major infrastructure vendors positioning for the private cloud market, as seen with the EMC/Cisco/VMware joint venture cloud services company Acadia and a slew of similarly minded partnerships.
Indeed, the cloud is a confusing, marketing-fueled circus at the moment, but I do think the impact on traditional infrastructure roles will be significant over time. I personally don't think the private cloud is a cloud (I like to call it virtualized infrastructure with good engineering), and as some people would suggest, “not everything is a cloud”. What I'm talking about is the impact of public cloud infrastructure services on general IT infrastructure.
So here’s a preliminary speculation on the cloud impact to infrastructure roles:
- Tech Cycle One (2010-2012)
- Storage cloud services evolve for archive, file, and disaster recovery
- Compute cloud services mature to data center ‘friendly’ offerings
- Flurry of device and software entries to market to solve ‘first mile problem’ associated with cloud archive and storage (this is something I plan to write about soon)
- Fewer system administrator type roles required for small-medium businesses and startups (via cloud compute)
- Early adoption of storage and compute services by large enterprises – primarily x86 and secondary storage, but limited impact to headcount
- Tech Cycle Two (2013-2015)
- Storage cloud services mature, consolidations and major acquisitions begin
- Compute cloud services become standard for small business and adoption begins in mid-market
- Significant reduction of standard back-office support (files, email) for small-medium businesses and startups
- System administrator type roles for small-medium businesses and startups become less commonplace, in some cases not required at all
- Initial impact to large enterprise storage and compute services – primarily x86 and secondary storage, with a possible reduction in skill requirements
As a practitioner, this is a good time to track the trends and stay ahead of the shift. This one may shake up the status-quo management model we’ve enjoyed for the last 15 years in distributed computing.
There’s an interesting trend in storage infrastructure outsourcing, where outsource vendors onboard capital assets and then gradually run them into the ground. You see this over and over, especially in long-term outsourced environments (IBM, EDS/HP, Perot/Dell, EMC, etc.) – the onboarded hardware/software estate (which is rarely in great shape) evolves into a living graveyard over 3-5 years. The root cause boils down to contract economics:
- Outsourcer strikes a deal, commonly ‘your mess for less’ and onboards assets and management responsibilities
- Contract is structured around data volume / growth metrics (nothing to do with efficiency)
- Legacy designs are grandfathered in, and ongoing growth is cobbled onto the existing asset base
- Over time, the asset base ages, hardware reaches end-of-life and software runs out of support
- New technologies mean architecture and planning work, not typically budgeted for as ‘project work’ in outsource contracts
- New technologies require upgrades and migrations, again not typically budgeted for due to competitive price pressures
- Time marches on, the outsourcer keeps up with operational needs, and architecture/planning remains best-effort
The problem with offloading infrastructure assets to an outsourcer is that there’s absolutely no incentive for the outsourcer to run lean or maintain a modern estate. The result is negligible at first, but over time incredibly risky. Here are some observations on the downside of letting your infrastructure run into the ground:
- Operational risk – mean time between failure metrics are only published for the duty-cycle / life of devices, not the after-life!
- No disaster recovery – when your asset base is out of support, non-standardized, and new ‘one-offs’ are added to the mix, disaster recovery is a long shot
- Skills drain – if the people supporting the living museum of hardware/software never learn anything new, are you really getting service, or are you just paying the bill for ‘not so innovative’ people and services?
- No way out – there is absolutely a breaking point where upgrading is no longer an option, and your only choice will be a costly and potentially complex ‘green-field’ deployment
Bottom line – it’s best to own your infrastructure and pay other companies to run it. If you can, hire good architects and planners, and worst case, hire them on a project basis. This way, you can at least control the design and potentially avoid a mess, while applying contractual pressure to the outsourcer to innovate and use modern technology.
Last week I had the pleasure of visiting São Paulo – SP for a week of work. In spite of what my colleagues thought I’d be up to, it was a full-on work week and a great experience. The 15-hour overnight trip from Boston to São Paulo is definitely a grinder, even with the modern-day luxuries found in economy class (such as the coveted 2-seat rows in the back of the 777).
On the whole, the experience working in São Paulo was brilliant. People are good to work with, the economy is blazing, and the scale of the business and cultural terrain is mind-boggling. Some observations in general:
- Relationships are central to business, indeed a requirement for doing business
- Lunch is a significant part of the day (working hours are long, but a big lunch rounds out the 9-to-7-or-8 routine)
- Excessive rations of espresso are not only tolerated, but encouraged in the workplace
- Being on time is important, yet the average trip ‘across town’ requires 90 minutes of lead time, traffic being unpredictable
- Delays are common, and every schedule is subject to change at last minute
- Delays are accepted, but regardless, #4 (being on time) must be closely observed
- IT infrastructure is generally the same as in the US market; however, IT infrastructure services are relatively new, and emerging technologies are roughly 18 months behind the US enterprise market in terms of adoption (and that’s a generous call given the regional vendor architect designs)
- Hardware/software vendors have a great deal of command over customer decisions and architecture
- The market is ripe for advancement of IT infrastructure services, mainly due to accelerated growth of industry and business
All this points to a great market for IT services, but there are most certainly economic barriers to entry for non-national firms. On top of the usual currency differences, Brasil levies a >40% tax on any imported services. I appreciate the concept, in comparison to the US market, where there have been virtually no incentives for retaining local/regional IT services and few if any barriers to offshoring. It’s not an outrageous levy, but it provides a basic incentive to look within before shopping globally.

This raises the question of whether long-term labor arbitrage in the US will create a skills shortage in IT that could persist for a generation. So while the US IT skills base is being bartered away to the lowest bidder by many multi-nationals, Brasil by comparison is in high-growth mode, has an abundance of talented young resources, and has a government advocating both the offshore trade of in-country services and, in parallel, skilled labor development. Not bad policy for a developing economy – maybe we should take some notes in the US.