Big Tin

Big tin: IT infrastructure used by organisations to run their businesses. And other stuff too when I feel like it…

Hard disks and flash storage will co-exist – for the moment

When it comes to personal storage, flash is now the default technology. It’s in your phone, tablet, camera, and increasingly in your laptop too. Is this about to change?

I’ve installed solid-state disks in my laptops for the last three or so years simply because it means they fire up very quickly and – more importantly – battery life is extended hugely. My Thinkpad now works happily for four or five hours while I’m using it quite intensively, where three hours used to be about the maximum.

The one downside is the price of the stuff. It remains stubbornly stuck at 10x or more the price per GB of spinning disks. When you’re using a laptop as I do, with most of my data in the cloud somewhere and only a working set kept on the machine, a low-end flash disk is big enough and therefore affordable: 120GB will store Windows and around 50GB of data and applications.

From a company’s point of view, the equation isn’t so different. Clearly, the volumes of data to be stored are bigger but despite the blandishments of those companies selling all-flash storage systems, many companies are not seeing the benefits. That’s according to one storage systems vendor which recently announced the results of an industry survey.

Caveat: industry surveys are almost always skewed because of sample size and/or the types of questions asked, so the results need to be taken with a pinch – maybe more – of salt.

Tegile Systems reckons that 99 percent of SME and enterprise users turning to solid-state storage will overpay. They're buying more than they need, the survey finds – at least according to the press release, which wastes no time in mentioning, in its second paragraph, that the company penning the release just happens to have the solution. So shameless!

Despite that, I think Tegile is onto something. Companies are less sensitive to the price per GB than they are to the price/performance ratio – performance usually being expressed in IOPS – which is where solid state delivers in spades. It's much quicker than spinning disks at returning information to the processor, and it's cheaper to run in terms of its demands on power and cooling.

Where the over-payment bit comes in is this (from the release): “More than 60% of those surveyed reported that these applications need only between 1,000 and 100,000 IOPS. Paying for an array built to deliver 1,000,000 IOPS to service an application that only needs 100,000 IOPS makes no sense when a hybrid array can service the same workload for a fraction of the cost.”

In other words, replacing spinning disks with flash means you’ve got more performance than you need, a claim justified by the assertion that only a small proportion of the data is being worked on at any one time. So, the logic goes, you store that hot data on flash for good performance but the rest can live on spinning disks, which are much cheaper to buy. In other words, don’t replace all your disks with flash, just a small proportion, depending on the size of your working data set.
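The tiering logic the vendors describe can be sketched in a few lines. This is a minimal, greedy sketch with invented names, capacities and access counts – not any vendor's actual placement algorithm:

```python
# Illustrative sketch of hot/cold tiering: the most frequently accessed
# blocks go to the (small, fast) flash tier, everything else stays on
# cheaper spinning disk. All names and numbers are invented.

def place_blocks(access_counts, flash_capacity):
    """Given {block_id: access_count}, return (flash_set, disk_set).

    Greedy: the hottest blocks fill the flash tier until it is full;
    the remainder lands on spinning disk.
    """
    hottest_first = sorted(access_counts, key=access_counts.get, reverse=True)
    flash = set(hottest_first[:flash_capacity])
    disk = set(hottest_first[flash_capacity:])
    return flash, disk

counts = {"a": 900, "b": 850, "c": 12, "d": 3, "e": 1}
flash, disk = place_blocks(counts, flash_capacity=2)
# flash == {"a", "b"}; disk == {"c", "d", "e"}
```

Real arrays do this continuously and at block or page granularity, but the principle is the same: the flash tier only needs to be as big as the working set.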

It’s a so-called hybrid solution. And of course Tegile recommends you buy its tuned-up, all-in-one hybrid arrays, which save you the trouble of building your own.

Tegile is not alone in the field, with Pure Storage having recently launched in Europe. Pure uses ordinary consumer-grade disks, which should make it even cheaper although price comparisons are invariably difficult due to the ‘how long is a piece of string?’ problem.

There are other vendors too but I’ll leave you to find out who they are.

From a consumer point of view though, where’s the beef? There’s a good chance you’re already using a hybrid system if you use a recent desktop or laptop, as a number of hard disk manufacturers have taken to front-ending their mechanisms with flash to make them feel more responsive.

Hard disks are not going away: their price per GB is falling just as quickly as flash’s, even though the two technologies’ characteristics differ. There will though come a time when flash capacities are big enough for ordinary use – just like my laptop – and everyone will get super-fast load times and longer battery life.

Assuming that laptops and desktops survive at all. But that’s another story for another time.


Filed under: Data centre, desktops, Laptop, Storage, Technology

2012: the tech year in view (part 1)

As 2012 draws to a close, here’s a round-up of some of the more interesting news stories that came my way this year. This is part 1 of 2 – part 2 will be posted on Monday 31 December 2012.

Storage
Virsto, a company making software that boosts storage performance by sequentialising the random data streams from multiple virtual machines, launched Virsto for vSphere 2.0. According to the company, this adds features for virtual desktop infrastructures (VDI), and it can lower the cost of providing storage for each desktop by 50 percent. The technology can save money because you need less storage to deliver sufficient data throughput, says Virsto.

At the IPExpo show, I spoke with Overland which has added a block-based product called SnapSAN to its portfolio. According to the company, the SnapSAN 3000 and 5000 offer primary storage using SSD for caching or auto-tiering. This “moves us towards the big enterprise market while remaining simple and cost-effective,” said a spokesman. Also, Overland’s new SnapServer DX series now includes dynamic RAID, which works somewhat like Drobo’s system in that you can install differently sized disks into the array and still use all the capacity.
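Drobo-style single-redundancy layouts with mixed disk sizes are often approximated as “usable capacity equals the sum of all disks minus the largest one”. A sketch of that rule of thumb – an approximation for illustration, not Overland’s or Drobo’s actual algorithm:

```python
# Rule-of-thumb usable capacity for a single-redundancy array of
# mixed-size disks: roughly the sum of all disks minus the largest,
# since the redundancy overhead must cover the biggest member.

def usable_capacity(disk_sizes_gb):
    if len(disk_sizes_gb) < 2:
        return 0  # need at least two disks for any redundancy
    return sum(disk_sizes_gb) - max(disk_sizes_gb)

print(usable_capacity([1000, 2000, 3000]))  # 3000 GB usable
```

Unlike traditional RAID, which sizes every member down to the smallest disk, this scheme wastes none of the capacity of the smaller drives.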

Storage startup Tegile is one of many companies building storage arrays with both spinning and solid-state disks, aiming – the company claims – to boost performance cost-effectively. Tegile reduces data aggressively, using de-duplication and compression, and so cuts the cost of the SSD overhead. Its main competitor is Nimble Storage.
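The economics of that claim are simple enough to sketch: with a combined de-duplication and compression ratio r, the effective price per GB of flash is the raw price divided by r. The prices and the 4:1 ratio below are invented for illustration, not Tegile’s figures:

```python
# How data reduction changes flash economics: a reduction ratio of r
# means each raw GB of SSD stores r logical GB, so the effective
# price per GB drops by a factor of r. All numbers are hypothetical.

def effective_price_per_gb(raw_price_per_gb, reduction_ratio):
    return raw_price_per_gb / reduction_ratio

flash_raw = 1.00   # $/GB, hypothetical
disk_raw = 0.08    # $/GB, hypothetical
print(effective_price_per_gb(flash_raw, 4.0))  # 0.25 $/GB effective
```

With a 4:1 reduction ratio, the effective gap between flash and spinning disk narrows from over tenfold to roughly threefold in this made-up example.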

Nimble itself launched a so-called ‘scale to fit’ architecture for its hybrid SSD-spinning disk arrays this year, adding a rack of expansion shelves that allows capacity to be expanded. It’s a unified approach, says the company, which means that adding storage doesn’t mean you need to perform a lot of admin moving data around.

Cloud computing
Red Hat launched OpenShift Enterprise, a cloud-based platform-as-a-service (PaaS). This is, says Red Hat, a solution for developers to launch new projects, including a development toolkit that allows you to quickly fire up new VM instances. Based on SELinux, the system lets you fire up a container and get middleware components such as JBoss and PHP, plus a wide variety of languages. The benefit, says the company, is that the system allows you to pool your development projects.

Red Hat also launched Enterprise Virtualization 3.1, a platform for hosting virtual servers with up to 160 logical CPUs and up to 2TB of memory per virtual machine. It adds command line tools for administrators, and features such as RESTful APIs, a new Python-based software development kit, and a bash shell. The open source system includes a GUI to allow you to manage hundreds of hosts with thousands of VMs, according to Red Hat.

HP spoke to me at IPExpo about a new CGI rendering system that it’s offering as a cloud-based service. According to HP’s Bristol labs director, it’s 100 percent automated and autonomic: a graphics designer uses a framework to send a CGI job to a service provider, who creates the film frame. The service works by estimating the number of servers required, setting them up and configuring them automatically in just two minutes, then tearing them down after delivery of the video frames. The evidence that it works can apparently be seen in the animated film Madagascar where, to make the lion’s mane move realistically, calculations were needed for 50,000 individual hairs.

For the future, HP Labs is looking at using big data and analytics for security purposes and is looking at providing an app store for analytics as a service.

Security
I also spoke with Rapid7, an open-source security company that offers a range of tools for companies large and small to control and manage the security of their digital assets. It includes a vulnerability scanner, Nexpose, a penetration testing tool, Metasploit, and Mobilisafe, a tool for mobile devices that “discovers, identifies and eliminates risks to company data from mobile devices”, according to the company. Overall, the company aims to provide “solutions for comprehensive security assessments that enable smart decisions and the ability to act effectively”, a tall order in a crowded security market.

I caught up with Druva, a company that develops software to protect mobile devices such as smartphones, laptops and tablets. Given the explosive growth in the numbers of end-user owned devices in companies today, this company has found itself in the right place at the right time. New features added to its flagship product inSync include better usability and reporting, with the aim of giving IT admins a clearer idea of what users are doing with their devices on the company network.

Networking
Enterasys – once Cabletron for the oldies around here – launched a new wireless system, IdentiFi. The company calls it wireless with embedded intelligence offering wired-like performance but with added security. The system can identify issues of performance and identity, and user locations, the company says, and it integrates with Enterasys’ OneFabric network architecture that’s managed using a single database.

Management
The growth of virtualisation in datacentres has resulted in a need to manage the virtual machines, so a number of companies focusing on this problem have sprung up. Among them is vKernel, whose product vOPS Server aims to be a tool for admins that’s easy to use; experts should feel they have another pair of hands to help them, as one company spokesman put it. The company, now owned by Dell, claims to have the largest feature set for virtualisation management when you include its vKernel and vFoglight products, which provide analysis, advice and automation of common tasks.

Filed under: Business, Cloud computing, data protection, Enterprise, mobile, Networking, Product, Product launch, Security, Servers, Storage, Systems management, Technology

Are SSDs too expensive?

Recent weeks have seen a deluge of products from solid-state disk (SSD) vendors, such as Tegile, Fusion-IO, and now LSI to name but a few; a significant proportion of new storage launches in the last year or two have been based around SSDs.

Some of this is no doubt opportunism: the production of spinning disk media was seriously disrupted by floods in Thailand last year, a disruption the disk industry reckons has now passed. Much of the SSD-fest, though, purports to solve the problem of eking more performance out of storage systems.

In your laptop or desktop PC, solid state makes sense simply because of its super-fast performance: you can boot the OS of your choice in 15-30 seconds, for example, and a laptop’s battery life is hugely extended. My ThinkPad now runs happily for four to five hours of continuous use, more if I watch a video or don’t interact with it constantly. And in a tablet or smartphone of course there’s no contest.

The problem is that the stuff is expensive: a quick scan of retail prices shows a delta of between 13 and 15 times the price of hard disks, measured purely on a capacity basis.

In the enterprise, though, things aren’t quite as simple as that. The vendors’ arguments in favour of SSDs ignore capacity, as they assume that the real problem is performance, where they can demonstrate that SSDs deliver more value for a given amount of cash than spinning media.
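That value argument is easy to illustrate in IOPS per dollar. All the figures below are hypothetical, chosen only to show why SSDs win on performance while losing badly on capacity:

```python
# Price/performance versus price/capacity, with invented numbers:
# a hypothetical SSD and hard disk compared on IOPS per dollar
# (where flash wins) and dollars per GB (where it loses).

def iops_per_dollar(iops, price):
    return iops / price

ssd = {"iops": 50_000, "gb": 240, "price": 300.0}
hdd = {"iops": 150, "gb": 2000, "price": 100.0}

print(iops_per_dollar(ssd["iops"], ssd["price"]))   # ~167 IOPS per dollar
print(iops_per_dollar(hdd["iops"], hdd["price"]))   # 1.5 IOPS per dollar
print(ssd["price"] / ssd["gb"], hdd["price"] / hdd["gb"])  # $/GB: 1.25 vs 0.05
```

On these made-up figures the SSD is two orders of magnitude better per dollar on IOPS, and 25 times worse per GB – which is exactly the tension the vendors’ pitch glosses over.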

There is truth in this argument, but it’s not as if data growth is slowing down. In fact, when you consider that the next wave of data will come from sensors and what’s generally known as the Internet of things – or machine-to-machine communication – then expect the rate of data growth to increase, as this next data tsunami has barely started.

And conversations with both vendors and end users also show that capacity is not something that can be ignored. If you don’t have or can’t afford additional storage, you might need to do something drastic – like actually manage the stuff, although each time I’ve mooted that, I’m told that it remains more costly to do than technological fixes like thin provisioning and deduplication.

In practice, the vendors are, as so often happens in this industry, way ahead of all but the largest, most well-heeled customers. Most users, I would contend, are more concerned with ensuring that they have enough storage to handle projected data growth over the next six months. Offer them high-cost, low-capacity storage technology and they may well reject it in favour of capacity now.

When I put this point to him, LSI’s EMEA channel sales director Thomas Pavel reckoned that the market needed education. Maybe it does. Or maybe it’s just fighting to keep up with demand.

Filed under: Enterprise, Servers, Storage
