Now anybody who's spent time working with me on my companies' global storage BOMs will understand that this is a major issue for me, and not something that is getting any easier. The issue is a complex one:
- The € per GB ratio becomes more attractive the larger the capacity within an array (as the chassis, interfaces, controllers & software overheads get amortised over a larger capacity) - however of course the actual capex & opex costs continue to be very sizeable and tricky to explain (ie "why are we buying 32TB of disk for this 2TB database??")
- As the GB per drive ratio increases, the IOPS per individual drive stays relatively constant - so the IOPS/GB ratio is in steady decline, and performance management becomes an ever more complex & visible topic (there's a quick back-of-the-envelope sketch of both ratios after this list)
- IT management have been (incorrectly) conditioned by various consultants & manufacturers that 'capacity utilisation' is the key KPI (as opposed to the correct measure of "TCO per GB utilised")
- DC efficiency & floor-space density are driving more spindles per disk shelf, which means more GB per shelf
- Arrays are designed to be physically expanded in fixed unit sizes, often 2 or 4 shelves at a time
- As spindle sizes wend their merry way up in capacity, the minimum quantity of spindles doesn't get any less, so the capacity steps get bigger
- Software licences are often tied either to the physical capacity installed in the array, or to some arbitrary unit-of-capacity / licence-key combination - neither of which changes with spindle size
- Naturally this additional capacity isn't 'equally usable' within the array - so a classic approach has been either to 'short stroke' the spindles or to use the surplus for low-IO activity. However, to achieve this you either need good archiving and ILM, or have to invest in other (relatively sub-optimal compared to application-level ILM) technology licences such as FAST v2.
- Of course these sizes & capacities differ by vendor, so trying to normalise BOM sizes between vendors becomes an art rather than a science
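To make the shape of this concrete, here's a quick back-of-the-envelope sketch in Python. The overhead, drive-price and IOPS figures are entirely made up for illustration - none of them come from any vendor's price list. Holding the minimum spindle count constant and just growing the drive size shows the three effects at once: €/GB improves, the minimum capacity step balloons, and IOPS/GB quietly falls away.

```python
# Illustrative only: the fixed costs, drive prices and IOPS figures below are
# assumptions chosen to show the shape of the curves, not real vendor pricing.

FIXED_OVERHEAD_EUR = 150_000   # chassis, controllers, interfaces, base software
MIN_SPINDLES = 32              # smallest sensible shelf/RAID configuration
IOPS_PER_SPINDLE = 180         # roughly constant per drive, regardless of its size

def array_metrics(drive_size_gb, price_per_drive_eur, spindles=MIN_SPINDLES):
    """Cost per GB and IOPS per GB for a minimum-sized array build."""
    raw_gb = drive_size_gb * spindles
    total_cost = FIXED_OVERHEAD_EUR + price_per_drive_eur * spindles
    return {
        "raw_gb": raw_gb,
        "eur_per_gb": total_cost / raw_gb,
        "iops_per_gb": (IOPS_PER_SPINDLE * spindles) / raw_gb,
    }

# Same spindle count, bigger drives: €/GB falls, but so does IOPS/GB,
# and the minimum capacity step gets much larger.
for size_gb, price in [(300, 400), (600, 500), (900, 600), (2000, 800)]:
    m = array_metrics(size_gb, price)
    print(f"{size_gb:>5} GB drives: {m['raw_gb']/1000:6.1f} TB raw, "
          f"{m['eur_per_gb']:.2f} EUR/GB, {m['iops_per_gb']:.3f} IOPS/GB")
```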
So what does this all mean?
- Inevitably it means that the entry level capacity of arrays is going up, and that the sensible upgrade steps are similarly going up in capacity.
- We are going to have to spend more time re-educating management that "TCO per GB utilised" is the correct measure
- Vendors are going to have to get much better at the technical side of software & functionality licensing, so that it much more closely matches the unit of granularity required by the customer
- All elements of array deployment, configuration, management, performance and usage must be moved from physical (ie spindle size related) to logical constructs (ie independent of disk size)
- Of course SNIA could also do something actually useful for the customer (for a change) and set a standard for measuring and discussing storage capacities - not as hard as it might appear, as most enterprises will already have some form of waterfall chart or layer model to navigate from 'marketing GB' through at least five layers down to 'application data GB' (a sketch of such a waterfall follows at the end of this list)
- Naturally the strong drive towards shared infrastructure and enterprise procurement models (as opposed to 'per-project accounting'), combined with internal service opex recharging within the enterprise estate, will also help to make the costs appear linear to the internal business customer (though not to the company as a whole)
- The real breakthrough, though, will be a vendor that combines a technical s/ware & h/ware architecture with a commercial licence & cost model that actually scales from small to large - and no, I don't mean leasing or other financial jiggery-pokery
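And since most of the above hinges on agreeing what a 'GB' actually is and what it costs once it's in use, here's a sketch of the sort of waterfall / layer model mentioned above, together with the "TCO per GB utilised" figure that falls out of it. Every layer name, loss factor and cost figure here is an illustrative assumption - not a SNIA definition, and not any particular vendor's numbers.

```python
# A hypothetical capacity 'waterfall' - the layer names and loss factors are
# example assumptions, not a standard and not any specific vendor's figures.
WATERFALL = [
    ("marketing GB (decimal, raw)",        1.00),
    ("binary GiB after base-2 conversion", 0.93),
    ("after RAID / hot spares",            0.75),
    ("after filesystem / pool overhead",   0.90),
    ("after snapshot / replica reserves",  0.80),
    ("application data GB actually used",  0.70),
]

def usable_capacity(marketing_gb):
    """Walk the layers and report what survives at each step."""
    gb = marketing_gb
    for layer, factor in WATERFALL:
        gb *= factor
        print(f"{layer:<40} {gb:10.0f} GB")
    return gb

raw_gb = 32_000          # the '32TB for a 2TB database' example above
tco_eur = 250_000        # assumed total cost of ownership for that array
used_gb = usable_capacity(raw_gb)

print(f"\n'capacity utilisation' KPI : {used_gb / raw_gb:.0%}")
print(f"TCO per GB utilised        : {tco_eur / used_gb:.2f} EUR/GB")
```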