Big Tin

Big tin: IT infrastructure used by organisations to run their businesses. And other stuff too when I feel like it…

Is the cloud letting consumers down?

The promise of cloud services has, by and large, been fulfilled. From the early days right up to the present, the big issue has been security: is your data safe?

What this question is really asking is whether you can retrieve your data quickly in the event of a technological melt-down. You know the kind of thing: an asteroid hits your business premises, a flood or fire makes your office unusable for weeks or months, or some form of weird glitch or malware makes your data unavailable, and you need to restore a backup to fix it.

All these scenarios are now pretty much covered by the main cloud vendors so, from a business perspective, what’s not to like?

Enter the consumer

Consumers – all of us, in other words – are also users of cloud services. Whether your phone uploads photos to the manufacturer’s cloud service, or you push terabytes of multimedia data up to a big provider’s facility, the cloud is integrated into everything that digital natives do.

The problem here is that, when it comes to cloud services, you get what you pay for. Enterprises will pay what it takes to get the level of service they want, whether it’s virtual machines for development purposes that can be quick and easy to set up and tear down, or business-critical applications that need precise configuration and multiple levels of redundancy.

Consumers on the other hand are generally unable to pay enterprise-level cash but an increasing number have built large multimedia libraries and see the cloud as a great way of backing up their data. Cloud providers have responded to this demand in various ways but the most common is a bait-and-switch offer.

Amazon’s policy changes provide the latest and arguably most egregious example. In March 2015, it introduced an unlimited data storage service – not just photos, as Google and others were already offering – all for just £55 a year. Clearly many people saw this as a massive bargain and, although figures are not publicly available, many took it up.

Amazon dumps the deal

But in May 2017, just over two years later, Amazon announced that the deal was going to be changed, and subscribers would have to pay on a per-TB basis instead. This was after many subscribers – according to user forums – had uploaded dozens of terabytes over a period of months at painfully slow, asymmetrical data rates.

Now they are offered, on a take-it-or-leave-it basis, an expensive cloud service – costing perhaps three or four times more, depending on data volumes – and are left with a mass of data that will be difficult to migrate. On Reddit, many said they have given up on cloud providers and are instead investing in local storage.

This isn’t the first time such a move has been made by a cloud provider: bait the users in, then once they’re committed, switch the deal.

Can you trust the cloud?

While cloud providers are of course perfectly at liberty to change their terms and conditions according to commercial considerations, it’s hard to think of any other consumer service that would risk such a major change to its T&Cs, for fear of user backlash – especially one of the largest global providers.

The message that Amazon’s move transmits is that cloud providers cannot be trusted, and that a deal which looks almost too good to be true will almost certainly turn out to be just that, even when it’s offered by a very large service provider that users might imagine would be more stable and reliable. That the switch comes at a time when storage costs continue to plummet makes it all the more surprising.

In its defence, Amazon said it will honour existing subscriptions until they expire, and only start deleting data 180 days after expiry.

That said, IT companies need to grow up. They’re not startups any more. If they offer a service and users take them up on it in good faith – as Amazon’s commercial managers must surely have expected – they should deal with it in a way that doesn’t risk destroying faith and trust in cloud providers.

It’s not just consumers who are affected. It shouldn’t be forgotten that business people are also consumers and the cloud purchasing decisions they make are bound to be influenced to a degree by their personal experiences as well as by business needs, corporate policy and so on.

So from the perspective of many consumers, the answer to the question of whether you can trust the cloud looks pretty equivocal. The data might still be there but you can’t assume the service will continue along the same or similar lines as those you originally signed up to.

Can you trust the cloud? Sometimes.


Filed under: Cloud computing, Consumer, Storage, Technology

Storage roundup with PernixData, Arista Networks and Tarmin

There’s been a bit of a glut of storage announcements recently, so here’s a quick round-up of the more interesting ones from recent weeks.

PernixData
This company is thinking ahead to a time when a large proportion of the servers in datacentres will have flash memory installed inside them. Right now, most storage is configured as a pool, connected via a dedicated storage network, but this is sub-optimal for virtualised servers, which generate large numbers of IOPS.

So instead, companies such as Fusion-io have developed flash memory systems for servers, so that data is local and can be accessed much more quickly. This abandons one of the advantages of the storage network, namely storage sharing.

So PernixData has created FVP (Flash Virtualization Platform), a software shim that sits in the hypervisor and links the islands of data stored in flash memory inside each of the host servers. The way it works is to virtualise the server flash storage so it appears as a storage pool across physical hosts. Adding more flash to vSphere hosts – they have to be running VMware’s hypervisor – prompts FVP to expand the pool of flash. According to the company, it works irrespective of the storage vendor.

What this effectively does is to create a cache layer consisting of all the solid-state storage in the host pool that can boost the performance of reads and writes from and to the main storage network.
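To make the caching idea concrete, here’s a toy Python sketch – my own illustration, not PernixData’s code – of a write-through cache sitting between virtual machines and a shared backing array:

```python
# Illustrative sketch: a write-through flash cache in front of shared storage.
# The dict standing in for "flash" represents the pooled server-side cache.

class FlashCache:
    """Serve reads from local flash where possible; pass writes through
    to the backing array so it remains the authoritative copy."""

    def __init__(self, backing_store):
        self.backing = backing_store   # the shared storage pool
        self.flash = {}                # stands in for pooled server flash

    def read(self, block_id):
        if block_id in self.flash:     # cache hit: no trip to the array
            return self.flash[block_id]
        data = self.backing[block_id]  # cache miss: fetch and populate
        self.flash[block_id] = data
        return data

    def write(self, block_id, data):
        self.flash[block_id] = data    # keep the cache warm
        self.backing[block_id] = data  # write-through for durability

backing = {"blk0": b"payload"}
cache = FlashCache(backing)
cache.read("blk0")                     # miss: fetched from the array
cache.write("blk1", b"new")            # write lands in flash and array
```

The real product has to handle cache coherence across hosts and failure of the flash layer, which this sketch ignores entirely.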

The company reckons that: “For the first time ever, companies can scale storage performance independent of storage capacity using server side flash.” And according to CEO Poojan Kumar: “We allow all virtual machines to use every piece of flash memory. The result is 17 times lower latency and 12 times more IOPS. It’s non-disruptive, it looks very simple and is easy to use.” It costs US$7,500 per physical host or US$10k for four hosts – a price point designed for smaller businesses.

It seems like a pretty good idea, and there’s some real-world testing info here.

Arista Networks
Also new on the hardware front are products from Arista Networks.

This company started life a few years ago with a set of high-performance network switches that challenged the established players – such as Cisco and Juniper – by offering products that were faster, denser, and cheaper per port. Aimed at the high-performance computing market – users such as life-science projects, geological-data processing, and financial institutions – they were the beachhead that established the company’s reputation, something it found easy given that its founders included Jayshree Ullal (ex-Cisco senior vice-president) and Andy Bechtolsheim (co-founder of Sun Microsystems).

I recently spoke to Doug Gourlay, Arista’s vice-president of systems engineering, about the new kit, which he reckoned means that Arista “can cover 100% of the deployment scenarios that customers come to us with”. He sees the company’s strength as its software, which is claimed to be “self-healing and very reliable, with an open ecosystem and offering smart upgrades”.

The new products are the 7300 and 7250 switches, filling out the 7000 X Series which, the company claims, optimises costs, automates provisioning, and builds more reliable scale-out architectures.

The main use cases for the new systems are sites with large numbers of servers in small datacentres, and dense, high-performance computing render farms, according to Gourlay. They are designed for today’s flatter networks: where a traditional datacentre network used three layers, a modern fabric-style network uses just two, offering the fewest hops from any server to any other. In Arista-speak, the switches attaching directly to the servers and directing traffic between them are leaves, while the core datacentre network is the spine.
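The appeal of the two-layer design is that path lengths stop depending on where servers sit. A toy model (mine, not Arista’s) shows why: traffic between any two leaves always crosses exactly one spine switch.

```python
# Toy model of a leaf-spine fabric: any two servers on different leaves
# are always the same distance apart (leaf -> spine -> leaf), so the
# fabric's latency profile is uniform regardless of placement.

def leaf_spine_path(src_leaf, dst_leaf, spine="spine1"):
    """Return the switch path between servers on the given leaf switches."""
    if src_leaf == dst_leaf:
        return [src_leaf]               # same leaf: one switch hop
    return [src_leaf, spine, dst_leaf]  # different leaves: always three

print(leaf_spine_path("leaf1", "leaf4"))   # ['leaf1', 'spine1', 'leaf4']
print(leaf_spine_path("leaf1", "leaf1"))   # ['leaf1']
```

In a classic three-tier design, by contrast, worst-case paths climb through access, aggregation and core layers, so hop count varies with placement.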

The 7300 X Series consists of three devices, the largest being the 21U 7316, offering 16 line-card slots with 2,048 10Gbps ports or 512 40Gbps ports. Claimed throughput is 40Tbps. The other two in the series, the 7308 and 7304, accommodate eight and four line cards respectively, with decreases in size (13U and 8U) and throughput (20Tbps and 10Tbps).

The 2U, fixed configuration 7250QX-64 offers 64 40Gbps ports or 256 10Gbps ports, and a claimed throughput of up to 5Tbps. All systems and series offer reversible airflow for rack positioning flexibility and a claimed latency of two microseconds. Gourlay claimed this device offers the highest port density in the world.

Tarmin
Tarmin was punting its core product, Gridbank, at the SNW show. It’s an object storage system with bells on.

Organisations deploy object storage technology to manage very large volumes of unstructured data – typically at the petabyte scale and above. Such data is created not just by workers but, increasingly, by machines: scientific instrumentation, including seismic and exploration equipment, genomic research tools and medical sensors, and industrial sensors and meters, to cite just a few examples.

Most object storage systems restrict themselves to managing the data on disk, leaving other specialist systems such as analytics tools to extract meaningful insights from the morass of bits. What distinguishes Tarmin is that Gridbank “takes an end to end approach to the challenges of gaining value from data,” according to CEO Shahbaz Ali.

He said: “Object technologies provide metadata but we go further – we have an understanding of the data which means we index the content. This means we can analyse a media file in one of the 500 formats we support, and can deliver information about that content.”

In other words, said Ali: “Our key differentiator is that we’re not focused on the media like most storage companies, but the data – we aim to provide transparency and independence of data from media. We do data-defined storage.” He called this an integrated approach which means that organisations “don’t need an archiving solution, or a management solution” but can instead rely on Gridbank.
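The distinction Ali is drawing – metadata versus content – can be illustrated with a small sketch. This is my own toy, not GridBank’s design: an object store that keeps a word-level index of what is inside each object, so you can search by content rather than only by metadata tags.

```python
# Toy illustration of metadata vs content indexing in an object store.

class ObjectStore:
    def __init__(self):
        self.objects = {}   # object_id -> (metadata, payload)
        self.index = {}     # word -> set of object_ids (content index)

    def put(self, object_id, metadata, payload):
        self.objects[object_id] = (metadata, payload)
        for word in payload.lower().split():
            self.index.setdefault(word, set()).add(object_id)

    def search_content(self, word):
        """Find objects by what's *inside* them, not just their metadata."""
        return self.index.get(word.lower(), set())

store = ObjectStore()
store.put("doc1", {"owner": "ops"}, "quarterly seismic survey results")
store.put("doc2", {"owner": "lab"}, "genomic sequencing batch log")
print(store.search_content("seismic"))   # {'doc1'}
```

Gridbank claims to do this kind of content understanding across some 500 file formats; the sketch just shows why it is a different job from storing metadata.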

That all sounds well and good, but one of the biggest obstacles to adoption has to be single-sourcing a technology that aims to manage all your data. Tarmin also has very few reference sites (I could find just two on its website), so it appears that the number of organisations taking the Tarmin medicine is small.

There are also, of course, a number of established players in the markets that GridBank straddles. And it remains to be seen whether an end-to-end solution is what organisations want, when integrating best-of-breed products avoids proprietary vendor lock-in – to which companies are more sensitive than ever – and is more likely to prove better for performance and flexibility.

Filed under: Storage, Technology

Seagate’s new KOS disk drives aim to entice cloud builders

Among the most interesting conversations I had at the storage show SNW (aka Powering the Cloud) in Frankfurt this year was with Seagate’s European cloud initiatives director Joe Fagan, as we talked about the company’s proposed Kinetic Open Storage (KOS) drives.

The disk drive company is trying to move up the stack from what has become commodity hardware by converting its drives into servers. Instead of attaching via a SATA or SAS connection, Kinetic drives will have – a SATA or SAS connector, not an RJ45. But the data flowing through that connector will use IP rather than storage protocols; the physical connector stays the same for compatibility purposes.

The aim is to help builders of large-scale infrastructures, such as cloud providers, to build denser, object-based systems by putting the server on the storage, rather than, to paraphrase Fagan, spending the energy on a Xeon or two per server along with a bunch of other hardware. Seagate argues that KOS could eliminate a layer of hardware between applications and storage, so data will flow from the application servers directly to storage rather than, as now, being translated into a variety of protocols before it hits the disk.

Fagan said two cloud builders were interested in the technology.

Behind this is, of course, a bid to grab some of the cash that enterprises and consumers are spending on cloud applications and services.

There are a few ‘howevers’, as you might imagine. Among the first is that every disk drive will need an IP address. This has huge implications for the network infrastructure and for network managers. Suddenly, there will be a lot more IP addresses to deal with, they will have to be subnetted and VLANned – did I mention that Kinetic drives will use IPv4? – and all this assumes you can summon up enough v4 addresses to start with.
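Some back-of-envelope arithmetic, using Python’s standard `ipaddress` module, shows the scale of the burden. The deployment size here is a hypothetical example, not a Seagate figure:

```python
# How much IPv4 subnet space does a drive-per-address design eat?
import ipaddress

drives = 10_000                                 # hypothetical deployment
subnet = ipaddress.ip_network("10.0.0.0/22")    # a common-sized subnet
per_subnet = subnet.num_addresses - 2           # minus network + broadcast
subnets_needed = -(-drives // per_subnet)       # ceiling division

print(per_subnet)        # 1022 usable addresses per /22
print(subnets_needed)    # 10 such subnets just for the drives
```

That is ten /22s (and the VLANs to go with them) before a single server or switch gets an address – and every drive replacement becomes an IP management event too.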

Another concern is that mechanical disk drives fail relatively frequently, while the solid-state server electronics bolted to them, having no moving parts, don’t. So when a drive fails – and in large-scale deployments some surely will – you have to throw away its internal server too. Could be expensive.

And finally, there’s also a huge amount of inertia in the shape of today’s installed systems and the expertise needed to manage and operate them.

Is that enough to halt the initiative? Seagate clearly hopes not, and hopes too that other drive makers will come on board and develop their own versions in order to help validate the concept. It has provided APIs to help app developers exploit the concept.

As ever, time will tell. But will you find these drives in a server near you any time soon? Don’t hold your breath.

Filed under: Data centre, Storage

Seagate launches new solid-state disks (SSD)

Seagate, the biggest maker of hard disks, recently launched a new range of solid state disk drives, as it aims to align itself better with current buying trends.

In particular, the company’s new 600 SSD is aimed at laptop users who want to speed their boot and data access times. This is Seagate’s first foray into this market segment.

Claiming a 4x boot-time improvement, Seagate said that SSD-stored data is also safer if the laptop is dropped. From my own experience of using SSDs in laptops over the last five years, I can confirm both this and that their lower power consumption helps to improve battery life too.

The 600 SSD offers up to 480GB of capacity, comes in a 2.5in form factor in multiple heights – including 5mm, which the company says makes it “ideal for most ultra-thin devices as well as standard laptop systems” – and is compatible with the latest 6Gbps SATA interface.

The other new SSD systems are aimed at enterprises. The most interesting of these is the X8 Accelerator, the result of Seagate’s investment in Virident, a direct competitor to Fusion-io, probably the best-known maker of directly attached SSDs for servers. Like its rival’s products, the Seagate device is a PCIe card, with claimed performance of up to 1.1 million IOPS and up to 2.2TB of capacity in a half-height, half-length card.

Of the two other new drives, the 2.5-inch 480GB 600 Pro SSD and the 1200 Pro SSD, the first is targeted at cloud system builders, data centres, cloud service providers, content delivery networks, and virtualised environments, and is claimed to consume less power and so need less cooling. It consumes 2.8W, variable according to workload, which Seagate reckons is “the industry’s highest IOPS/watt”.

Up the performance scale is the 800GB 1200 Pro SSD, which is aimed at those needing high throughput. It attaches using dual-port 12Gbps SAS connectors and “uses algorithms that optimize performance for frequently accessed data by prioritizing which storage operations, reads or writes, occur first and optimizing where it is stored.”

Seagate said it buys its raw flash memory from Samsung and Toshiba but holds patents for its controller and system management technologies.

Filed under: Cloud computing, Data centre, Enterprise, Laptop, Storage

Hard disks and flash storage will co-exist – for the moment

When it comes to personal storage, flash is now the default technology. It’s in your phone, tablet, camera, and increasingly in your laptop too. Is this about to change?

I’ve installed solid-state disks in my laptops for the last three or so years simply because it means they fire up very quickly and – more importantly – battery life is extended hugely. My Thinkpad now works happily for four or five hours while I’m using it quite intensively, where three hours used to be about the maximum.

The one downside is the price of the stuff. It remains stubbornly stuck at 10x or more the price per GB of spinning disks. When you’re using a laptop as I do, with most of my data in the cloud somewhere and only a working set kept on the machine, a low-end flash disk is big enough and therefore affordable: 120GB will store Windows and around 50GB of data and applications.

From a company’s point of view, the equation isn’t so different. Clearly, the volumes of data to be stored are bigger but despite the blandishments of those companies selling all-flash storage systems, many companies are not seeing the benefits. That’s according to one storage systems vendor which recently announced the results of an industry survey.

Caveat: industry surveys are almost always skewed because of sample size and/or the types of questions asked, so the results need to be taken with a pinch – maybe more – of salt.

Tegile Systems reckons that 99 percent of SME and enterprise users who are turning to solid state storage will overpay. They’re buying more than they need, the survey finds, at least according to the press release, which wastes no time by mentioning in its second paragraph that the company penning the release just happens to have the solution. So shameless!

Despite that, I think Tegile is onto something. Companies are less sensitive to the price per GB than they are to the price / performance ratio, usually expressed in IOPS, which is where solid-state delivers in spades. It’s much quicker than spinning disks at returning information to the processor, and it’s cheaper to run in terms of its demands on power and cooling.

Where the over-payment bit comes in is this (from the release): “More than 60% of those surveyed reported that these applications need only between 1,000 and 100,000 IOPS. Paying for an array built to deliver 1,000,000 IOPS to service an application that only needs 100,000 IOPS makes no sense when a hybrid array can service the same workload for a fraction of the cost.”

In other words, replacing spinning disks with flash means you’ve got more performance than you need, a claim justified by the assertion that only a small proportion of the data is being worked on at any one time. So, the logic goes, you store that hot data on flash for good performance but the rest can live on spinning disks, which are much cheaper to buy. In other words, don’t replace all your disks with flash, just a small proportion, depending on the size of your working data set.
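The arithmetic behind the hybrid pitch is simple enough to sketch. The prices and working-set fraction below are my own illustrative numbers, not Tegile’s:

```python
# Rough cost comparison: all-flash vs hybrid, given that only the hot
# working set actually needs flash-class performance.

def array_cost(total_gb, hot_fraction, flash_per_gb, disk_per_gb):
    """Cost of an array with the hot fraction on flash, the rest on disk."""
    hot = total_gb * hot_fraction
    return hot * flash_per_gb + (total_gb - hot) * disk_per_gb

total = 100_000   # 100TB of data, in GB (illustrative)
all_flash = array_cost(total, 1.0, 0.50, 0.05)   # everything on flash
hybrid = array_cost(total, 0.10, 0.50, 0.05)     # 10% working set on flash

print(all_flash)  # 50000.0
print(hybrid)     # 9500.0 - under a fifth of the all-flash price
```

The whole argument hinges on `hot_fraction`: the smaller your genuinely active working set, the stronger the hybrid case becomes.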

It’s a so-called hybrid solution. And of course Tegile recommends you buy its tuned-up, all-in-one hybrid arrays, which save you the trouble of building your own.

Tegile is not alone in the field, with Pure Storage having recently launched in Europe. Pure uses ordinary consumer-grade disks, which should make it even cheaper although price comparisons are invariably difficult due to the ‘how long is a piece of string?’ problem.

There are other vendors too but I’ll leave you to find out who they are.

From a consumer point of view though, where’s the beef? There’s a good chance you’re already using a hybrid system if you use a recent desktop or laptop, as a number of hard disk manufacturers have taken to front-ending their mechanisms with flash to make them feel more responsive from a performance perspective.

Hard disks are not going away: their price per GB is falling just as quickly as flash’s, even though the two technologies’ characteristics are different. There will, though, come a time when flash capacities are big enough for ordinary use – just like my laptop – and everyone will get super-fast load times and longer battery life.

Assuming that laptops and desktops survive at all. But that’s another story for another time.

Filed under: Data centre, desktops, Laptop, Storage, Technology

Technology highlights 2013

I’ve been shamefully neglecting this blog recently, yet a lot of interesting new technologies and ideas have come my way. So by way of making amends, here’s a quick round-up of the highlights.

Nivio
This is a company that delivers a virtual desktop service with a difference. Virtual desktops have been a persistent topic of conversation among IT managers for years, yet delivery has always been some way off. A bit like fusion energy, only not as explosive.

The problem is that, unless you’re serving desktops to people who do a single task all day – which describes call-centre workers but not most of us – users expect a certain level of performance and customisation from their desktops. If you’re going to take a desktop computer away from someone who uses it intensively as a tool, you’d better make sure the replacement technology is just as interactive.

Desktops provided by terminal services have tended to be slow and a bit clunky – and there’s no denying that Nivio’s virtual desktop service, which I’ve tried, isn’t quite as snappy as having 3.4GHz of raw compute power under your fingertips.

On the other hand, there’s a load of upsides. From an IT perspective, you don’t need to provide the frankly huge amounts of bandwidth needed to service multiple desktops. And you don’t care what device the end user accesses the service with – if you’re allowing people to bring their own devices into work, this will work with anything that has a browser. I’ve seen a Windows desktop running on an iPhone – scary…

And you don’t need to buy applications. The service provides them all for you from its standard set of over 40 applications – and if you need one the company doesn’t currently offer, they’ll supply it. Nivio also handles data migration, patching, and the back-end hardware.

All you need to do is hand over $35 per month per user.

Quantum
The company best known for its tape backup products launched a new range of disk-based backup appliances.

The DXi6800 is, says Quantum’s Stéphane Estevez, three times more scalable than any other such device, allowing you to scale from 13TB to 156TB. Aimed at mid-sized as well as large enterprises, it includes an array of disks that you effectively switch on with the purchase of a new licence. Until then, they’re dormant, not spinning. “We are taking a risk of shipping more disks than the customer is paying for – but we know customer storage is always growing. You unlock the extra storage when you need it,” said Estevez.

It can handle up to 16TB/hour which, the company reckons, is four times faster than EMC’s DD670 – its main competitor – and all data is encrypted and protected by an electronic certificate, so you can’t simply swap it into another Quantum system. The management tools mean you can manage multiple devices across datacentres.

Storage Fusion
If ever you wanted to know at a deep level how efficient your storage systems are, especially when it comes to virtual machine management, then Storage Fusion reckons it has the answers in the form of its storage analysis software, Storage Fusion Analyze.

I spoke to Peter White, Storage Fusion’s operations director, who reckoned that companies are wasting storage capacity by not over-provisioning enough, and by leaving old snapshots and storage allocated to servers that no longer exist.

“Larger enterprise environments have the most reclaimable storage because they’re uncontrolled,” White said, “while smaller systems are better controlled.”

Because the company’s software has analysed large volumes of storage, White was in a position to talk about trends in storage usage.

For example, most companies have 25% capacity headroom, he said. “Customers need that level of comfort zone. Partners and end users say that the reason is because the purchasing process to get disk from purchase order to installation can take weeks or even months, so there’s a buffer built in. Best practice is around that level but you could go higher.”

You also get what White called system losses, due to formatting inefficiencies and OS storage. “And generally processes are often broken when it comes to decommissioning – without processes, there’s an assumption of infinite supply which leads to infinite demand and a lot of wastage.”

The sister product, Storage Fusion Virtualize “allows us to shine a torch into VMware environments,” White said. “It can see how VM storage is being used and consumed. It offers the same fast analysis, with no agents needed.”

Typical customers include not so much enterprises as systems integrators, service providers and consultants.

“We are complementary to main storage management tools such as those from NetApp and EMC,” White said. “Vendors take a global licence, and end users can buy via our partners – they can buy report packs to run it monthly or quarterly, for example.”

SolidFire
Another product aimed at service providers, SolidFire steps aside from the usual all-solid-state-disk (SSD) pitch. Yes, solid-state is very fast compared to spinning media, but the company claims to offer a guarantee not just of uptime but of performance.

If you’re a provider of storage services in the cloud, one of your main problems, said the company’s Jay Prassl, is the noisy neighbour, the one tenant in a multi-tenant environment who sucks up all the storage performance with a single database call. This leaves the rest of the provider’s customers suffering from a poor response, leading to trouble tickets and support calls, so adding to the provider’s costs.

The aim, said Prassl, is to help service providers offer guarantees to enterprises they currently cannot offer because the technology hasn’t – until now – allowed it. “The cloud provider’s goal is to compute all the customer’s workload but high-performance loads can’t be deployed in the cloud right now,” he said.

So the company has built SSD technology that, because of the way that data is distributed across multiple solid-state devices – I hesitate to call them disks because they’re not – offers predictable latency.

“Some companies manage this by keeping few people on a single box but it’s a huge problem when you have hundreds or thousands of tenants,” Prassl said. “So service providers can now write a service level agreement (SLA) around performance, and they couldn’t do that before.”

Key to this is the automated way that the system distributes the data around the company’s eponymous storage systems, according to Prassl. It then sets a level of IOPS that a particular volume can achieve, and the service provider can then offer a performance SLA around it. “What we do for every volume is dictate a minimum, maximum and a burst level of performance,” he said. “It’s not a bolt-on but an architecture at the core of our work.”
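The minimum/maximum/burst envelope Prassl describes can be sketched in a few lines. This is a simplified model of mine, not SolidFire’s implementation: `scheduler_offer` stands for whatever the array’s scheduler would otherwise grant a volume.

```python
# Simplified per-volume QoS: a guaranteed minimum, a sustained maximum,
# and a burst ceiling usable only while the volume has burst credit.

def granted_iops(scheduler_offer, minimum, maximum, burst, burst_credit):
    """Fit the scheduler's offer into the volume's QoS envelope: never
    below the guaranteed floor, never above the sustained maximum unless
    burst credit allows a short excursion."""
    ceiling = burst if burst_credit > 0 else maximum
    return max(minimum, min(scheduler_offer, ceiling))

# Under contention, a volume is never squeezed below its floor:
print(granted_iops(100, 500, 15_000, 20_000, 0))      # 500
# A noisy neighbour is capped at its sustained maximum:
print(granted_iops(50_000, 500, 15_000, 20_000, 0))   # 15000
# With credit in hand, it may briefly exceed the maximum, up to burst:
print(granted_iops(50_000, 500, 15_000, 20_000, 5))   # 20000
```

The cap on the noisy neighbour is what makes the minimum enforceable for everyone else, which is exactly the SLA story SolidFire is selling.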

Filed under: Business, Cloud computing, Data centre, desktops, Enterprise, Product, Product launch, Servers, Storage, Systems management

2012: the tech year in view (part 2)

Here’s part 2 of my round-up of some of the more interesting news stories that came my way in 2012. Part 1 was published on 28 December 2012.

Datacentre infrastructure
NextIO impressed with its network consolidation product, vNet. This device virtualises the I/O of all the data to and from servers in a rack, so that they can share the bandwidth resource which is allocated according to need. It means that one adapter can look like multiple virtual adapters for sharing between both physical and virtual servers, with each virtual adapter looking like a physical adapter to each server. The main beneficiaries, according to the company, are cloud providers, who can then add more servers quickly and easily without having to physically reconfigure their systems and cables. According to the company, a typical virtualisation host can be integrated into the datacentre in minutes as opposed to hours.

In the same part of the forest, the longer-established Xsigo launched a new management layer for its Data Center Fabric appliances, its connectivity virtualisation products. This allows you to see all I/O traffic across all the servers, any protocol, and with a granularity that ranges from specific ports to entire servers.

Nutanix came up with a twist on virtualisation by cramming all the pieces you need for a virtualisation infrastructure into a single box. The result, says the company, is a converged virtualisation appliance that allows you to build a datacentre with no need for separate storage systems. “Our mission is to make virtualisation simple by eliminating the need for network storage,” reckons the company. Its all-in-one appliances mean faster setup and reduced hardware expenditure, the company claims. However, like any do-it-all device, its desirability depends on how much you value the ability to customise over ease of use and setup. Most tend to prefer separates so they can pick and choose.

Cooling servers is a major problem: it costs money and wastes energy that could be more usefully employed doing computing. This is why Iceotope has developed a server that’s entirely enclosed and filled with an inert liquid: 3M Novec 7300. This convects heat away from heat-generating components and is, according to chemical giant 3M, environmentally friendly and thermally stable. The fluid needs no pumping, instead using convection currents to transport heat and dump it to a water-filled radiator. The water is pumped but, Iceotope says, you need only a 72W pump for a 20kW cabinet of servers, a far cry from a typical 1:1 ratio of cooling energy to compute power when using air as the transmission medium.
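Iceotope’s numbers are worth a quick sanity check. Taking the figures above at face value – a 72W pump for a 20kW cabinet, against a typical 1:1 ratio for air cooling – the overhead gap is striking:

```python
# Cooling overhead: Iceotope's claimed 72W pump per 20kW cabinet,
# versus air cooling at roughly 1W of cooling per 1W of compute.

compute_w = 20_000
liquid_cooling_w = 72
air_cooling_w = compute_w * 1.0     # the typical 1:1 ratio cited above

overhead = liquid_cooling_w / compute_w
print(round(overhead * 100, 2))     # 0.36 - i.e. ~0.4% overhead vs 100%
print(air_cooling_w / liquid_cooling_w)  # air uses ~278x more energy
```

Even if the claimed figures are optimistic by a large margin, the gap between convection-driven liquid cooling and forced-air cooling would remain enormous.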

Networking
Vello Systems launched its Data Center Gateway incorporating VellOS, its operating system designed for software-defined networking (SDN) – probably the biggest revolution in network technology over the last decade. The box is among the first SDN products – as opposed to vapourware – to emerge. The OS can manage not just Vello’s own products but other SDN compliant systems too.

Cloud computing
One of the highlights of my cloud computing year was a visit to Lille to see one of OVH‘s datacentres. One of France’s biggest cloud providers, OVH is unusual in that it builds everything itself from standard components. You’ll find no HP, IBM or Dell servers here, just bare Supermicro motherboards in open trays, cooled by fresh air. The motivation, says the company, comes from the fact that there are no external investors and there is a high level of technical and engineering expertise at the top. Effectively, the company does it this way because it has the resources to do so, and “because we are techies and it’s one of our strong values”. The claimed benefit is lower costs for its customers.

I had an interesting discussion with Martino Corbelli, the chief customer officer at Star, a UK-based cloud services provider. He said that the UK’s mid-market firms are getting stuck in bad relationships with cloud services vendors because they lack both the management and negotiation skills required to handle issues and the budget to cover the cost of swapping out.

“The industry for managed services and cloud is full of examples of people who over promise and under deliver and don’t meet expectations,” he said, reckoning that discussions with potential customers now revolve more around business issues than technology. “Now it’s about the peer-to-peer relationship,” he said. “Can you trust them, are you on the same wavelength, do you feel that your CFO can call their CFO and talk to them as equals?”

We also saw the launch of new cloud computing providers and services from mid-market specialist Dedipower, CloudBees with a Java-based platform service, and Doyenz with a disaster recovery service aimed at smaller businesses.

Storage
Coraid boasted of attracting over 1,500 customers for its unique ATA-over-Ethernet (AoE) storage products. AoE carries storage traffic over native Ethernet rather than over storage-specific protocols layered on TCP/IP. Coraid reckons this reduces protocol overheads, making it three to five times faster than iSCSI. The company makes a range of storage systems but, although AoE is an open standard, no other company is designing and selling products based on it.

WhipTail joined the growing list of vendors selling all-flash storage systems with its Accela products. Solid-state gives you huge performance advantages but the raw storage (as opposed to the surrounding support infrastructure) costs around ten times as much as spinning disk, so the value proposition is that the added performance allows you to make more money.

Eventually, the bulk of storage will be solid-state, as the price comes down, with disk relegated to storing backups, archives and low-priority data, but that time has yet to come. It’s a delicate balancing act for companies such as WhipTail and Violin Memory: they don’t want to be too far ahead of the mass market, but nor do they want to miss the boat when flash storage becomes mainstream.

Filed under: Business, Cloud computing, Data centre, Enterprise, Networking, operating systems, Product launch, Storage, Systems management, Technology

2012: the tech year in view (part 1)

As 2012 draws to a close, here’s a round-up of some of the more interesting news stories that came my way this year. This is part 1 of 2 – part 2 will be posted on Monday 31 December 2012.

Storage
Virsto, a company making software that boosts storage performance by sequentialising the random data streams from multiple virtual machines, launched Virsto for vSphere 2.0. According to the company, this adds features for virtual desktop infrastructures (VDI), and it can lower the cost of providing storage for each desktop by 50 percent. The technology can save money because you need less storage to deliver sufficient data throughput, says Virsto.
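
The underlying idea – often called the “I/O blender” problem – is that many VMs writing at once turn orderly I/O into random I/O, which spinning disks handle poorly. A toy sketch of the general sequentialising technique (purely illustrative, not Virsto’s implementation): append every write to one sequential log and keep an index mapping each virtual block to its log position.

```python
# Toy log-structured write layer: random writes from several VMs become
# strictly sequential appends, with an index for later reads.
# Illustrative only; real products also handle persistence, GC, etc.
class SequentialLog:
    def __init__(self):
        self.log = []    # data stored in append order (sequential on disk)
        self.index = {}  # (vm_id, block) -> position in the log

    def write(self, vm_id, block, data):
        self.index[(vm_id, block)] = len(self.log)
        self.log.append(data)  # always an append, never a seek

    def read(self, vm_id, block):
        return self.log[self.index[(vm_id, block)]]

log = SequentialLog()
log.write("vm1", 907, b"a")  # scattered block addresses...
log.write("vm2", 13, b"b")
log.write("vm1", 52, b"c")   # ...land at consecutive log positions 0, 1, 2
```

Reads are served through the index, so the VM still sees its own random-access address space while the disk sees sequential writes.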

At the IPExpo show, I spoke with Overland, which has added a block-based product called SnapSAN to its portfolio. According to the company, the SnapSAN 3000 and 5000 offer primary storage using SSD for caching or auto-tiering. This “moves us towards the big enterprise market while remaining simple and cost-effective,” said a spokesman. Also, Overland’s new SnapServer DX series now includes dynamic RAID, which works somewhat like Drobo’s system in that you can install differently sized disks into the array and still use all the capacity.
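
The usual way such mixed-size arrays are described is that the total capacity minus the largest disk (or the largest N disks, for N-disk redundancy) is usable, since the biggest drive’s worth of space is reserved for protection. A simplified sketch of that rule of thumb – not Overland’s or Drobo’s actual algorithm:

```python
# Approximate usable space of a mixed-size redundant array: reserve the
# largest `redundancy` disks' worth of capacity for protection.
# A common simplification, not any vendor's real layout algorithm.
def usable_capacity(disks_tb, redundancy=1):
    reserved = sum(sorted(disks_tb, reverse=True)[:redundancy])
    return sum(disks_tb) - reserved

print(usable_capacity([1, 2, 3, 4]))  # 10 TB total, 4 TB reserved -> 6
```

Traditional RAID, by contrast, would treat every drive as if it were the size of the smallest, wasting the difference.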

Storage startup Tegile is one of many companies making storage arrays that mix spinning and solid-state disks and so, the company claims, boost performance cost-effectively. Tegile says it reduces data aggressively, using de-duplication and compression, and so cuts the cost of the SSD overhead. Its main competitor is Nimble Storage.
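
The data-reduction argument is worth spelling out with numbers. A sketch with invented figures – the prices and reduction ratio below are hypothetical, not Tegile’s:

```python
# Hypothetical figures illustrating how data reduction narrows the
# flash-versus-disk price gap. All numbers invented for illustration.
ssd_raw = 1.00    # $/GB of raw flash (hypothetical)
hdd_raw = 0.10    # $/GB of spinning disk (hypothetical)
reduction = 5.0   # combined dedupe + compression ratio (hypothetical)

ssd_effective = ssd_raw / reduction  # cost per *logical* GB stored

print(f"raw price gap:       {ssd_raw / hdd_raw:.0f}x")
print(f"effective price gap: {ssd_effective / hdd_raw:.0f}x")
```

On those assumptions a 10x raw gap shrinks to 2x per logical gigabyte, which is the essence of the pitch.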

Nimble itself launched a so-called ‘scale to fit’ architecture for its hybrid SSD-spinning disk arrays this year, adding a rack of expansion shelves that allows capacity to be expanded. It’s a unified approach, says the company, which means that adding storage doesn’t mean you need to perform a lot of admin moving data around.

Cloud computing
Red Hat launched OpenShift Enterprise, a cloud-based platform-as-a-service (PaaS). This is, says Red Hat, a solution for developers to launch new projects, including a development toolkit that allows you to fire up new VM instances quickly. Based on SELinux, the system lets you fire up a container and get middleware components such as JBoss, PHP, and a wide variety of languages. The benefit, says the company, is that the system allows you to pool your development projects.

Red Hat also launched Enterprise Virtualization 3.1, a platform for hosting virtual servers with up to 160 logical CPUs and up to 2TB of memory per virtual machine. It adds command line tools for administrators, and features such as RESTful APIs, a new Python-based software development kit, and a bash shell. The open source system includes a GUI to allow you to manage hundreds of hosts with thousands of VMs, according to Red Hat.

HP spoke to me at IPExpo about a new CGI rendering system that it’s offering as a cloud-based service. According to HP’s Bristol labs director, it’s 100 percent automated and autonomic. A graphics designer uses a framework to send a CGI job to a service provider, who creates the film frame. The service estimates the number of servers required, sets them up and configures them automatically in just two minutes, then tears them down after delivery of the video frames. The evidence that it works can apparently be seen in the animated film Madagascar where, to make the lion’s mane move realistically, calculations were needed for 50,000 individual hairs.

For the future, HP Labs is looking at using big data and analytics for security purposes and is looking at providing an app store for analytics as a service.

Security
I also spoke with Rapid7, an open-source security company that offers a range of tools for companies large and small to control and manage the security of their digital assets. It includes a vulnerability scanner, Nexpose, a penetration testing tool, Metasploit, and Mobilisafe, a tool for mobile devices that “discovers, identifies and eliminates risks to company data from mobile devices”, according to the company. Overall, the company aims to provide “solutions for comprehensive security assessments that enable smart decisions and the ability to act effectively”, a tall order in a crowded security market.

I caught up with Druva, a company that develops software to protect mobile devices such as smartphones, laptops and tablets. Given the explosive growth in the numbers of end-user owned devices in companies today, this company has found itself in the right place at the right time. New features added to its flagship product inSync include better usability and reporting, with the aim of giving IT admins a clearer idea of what users are doing with their devices on the company network.

Networking
Enterasys – once Cabletron for the oldies around here – launched a new wireless system, IdentiFi. The company calls it wireless with embedded intelligence, offering wired-like performance but with added security. The system can identify performance and identity issues, as well as user locations, the company says, and it integrates with Enterasys’ OneFabric network architecture, which is managed using a single database.

Management
The growth of virtualisation in datacentres has resulted in a need to manage the virtual machines, so a number of companies focusing on this problem have sprung up. Among them is vKernel, whose product vOPS Server aims to be a tool for admins that’s easy to use; experts should feel they have another pair of hands to help them do stuff, was how one company spokesman put it. The company, now owned by Dell, claims it has the largest feature set for virtualisation management when you include its vKernel and vFoglight products, which provide analysis, advice and automation of common tasks.

Filed under: Business, Cloud computing, data protection, Enterprise, mobile, Networking, Product, Product launch, Security, Servers, Storage, Systems management, Technology

Technology predictions for 2013

The approaching end of the year marks the season of predictions for and by the technology industry for the next year, or three years, or decade. These are now flowing in nicely, so I thought I’d share some of mine.

Shine to rub off Apple
I don’t believe that the lustre attaching to everything Apple does will save it from competitors that can now do pretty much everything it does, but without the smugness. Some of that lustre was deserved when Apple was the only company making smartphones, but this is no longer true. And despite the success of the iPhone 5, I wonder if its incremental approach – a slightly bigger screen and some nice-to-have features – will be enough to satisfy in the medium term. With no dictatorial obsessive at the top of a company organised around and for that individual’s modus operandi, can Apple make awesome stuff again, but in a more collective way?

We shall see, but I’m not holding my breath.

Touch screens
Conventional wisdom says that touchscreens only work when they are either horizontal and/or attached to a handheld device. It must be true: Steve Jobs said so. But have you tried using a touchscreen laptop? Probably not.

One reviewer has, though, and he makes a compelling case for them, suggesting that they don’t lead to gorilla arm, after all. I’m inclined to agree that a touchscreen laptop could become popular, as they share a style of interaction with users’ phones – and they’re just starting to appear. Could Apple’s refusal to make a touchscreen MacBook mean it’s caught wrong-footed on this one?

I predict that touchscreen laptops will become surprisingly popular.

Windows 8
Everyone’s got a bit of a downer on Windows 8. After all, it’s pretty much Windows 7 but with a touchscreen interface slapped on top. Doesn’t that limit its usefulness? And since enterprises are only now starting to upgrade from Windows XP to Windows 7 — and this might be the last refresh cycle that sees end users being issued with company PCs — doesn’t that spell the end for Windows 8?

I predict that it will be more successful than many think: not because it’s especially great – it certainly has flaws, especially when used with a mouse, which means learning how to use the interface all over again.

In large part, this is because the next version of Windows won’t be three years away or more, which has tended to be the release cycle of new versions. Instead, Microsoft is aiming for a series of smaller, point releases, much as Apple does but hopefully without the annoying animal names from which it’s impossible to derive an understanding of whether you’ve got the latest version.

So Windows Blue – the alleged codename – is the next version and will take into account lessons from users’ experiences with Windows 8, and take account of the growth in touchscreens by including multi-touch. And it will be out in 2013, probably the third quarter.

Bring your own device
The phenomenon whereby firms no longer provide employees with a computing device but instead allow you to bring your own, provided it fulfils certain security requirements, will blossom.

IT departments hate this bring your own device policy because it’s messy and inconvenient but they have no choice. They had no choice from the moment the CEO walked into the IT department some years ago with his shiny new iPhone – he was the first because he was the only one able to afford one at that point – and commanded them to connect it to the company network. They had to comply and, once that was done, the floodgates opened. The people have spoken.

So if you work for an employer, expect hot-desking and office downsizing to continue as the austerity resulting from the failed economic policies of some politicians continues to be pursued, in the teeth of evidence of its failure.

In the datacentre
Storage vendors will be snapped up by the deep-pocketed big boys – especially Dell and HP – as they seek to compensate for their mediocre financial performance by buying companies producing new technologies, such as solid-state disk caching and tiering.

Datacentres will get bigger as cloud providers amalgamate, and will more or less be forced to consider and adopt software-defined networking (SDN) to manage their increasingly complex systems. SDN promises to do that by virtualising the network, in the same way as the other major datacentre elements – storage and computing – have already been virtualised.

And of course, now that virtualisation is an entirely mainstream technology, we will see even bigger servers hosting more complex and mission-critical applications such as transactional databases, as the overhead imposed by virtualisation shrinks with each new generation of technology. What is likely to lag however is the wherewithal to manage those virtualised systems, so expect to see some failures as virtual servers go walkabout.

Security
Despite the efforts of technologists to secure systems – whether for individuals or organisations – security breaches will continue unabated. Convenience trumps security every time, experience teaches us. And this means that people will find increasingly ingenious ways around technology designed to stop them walking around with the company’s customer database on a USB stick in their pocket, or exposing the rest of the world to a nasty piece of malware because they refuse to update their operating system’s defences.

That is, of course, not news at all, sadly.

Filed under: Cloud computing, Consumer, data protection, desktops, Enterprise, Laptop, mobile, Networking, operating systems, Product, Security, Servers, Storage, Technology

Are SSDs too expensive?

Recent weeks have seen a deluge of products from solid-state disk (SSD) vendors, such as Tegile, Fusion-IO, and now LSI to name but a few; a significant proportion of new storage launches in the last year or two have been based around SSDs.

Some of this is no doubt opportunism, as the production of spinning disk media was seriously disrupted by floods in Thailand last year – a disruption the disk industry reckons has now passed. Much of the SSD-fest, though, purports to solve the problem of eking more performance out of storage systems.

In your laptop or desktop PC, solid state makes sense simply because of its super-fast performance: you can boot the OS of your choice in 15-30 seconds, for example, and a laptop’s battery life is hugely extended. My ThinkPad now runs happily for four to five hours of continuous use, more if I watch a video or don’t interact with it constantly. And in a tablet or smartphone of course there’s no contest.

The problem is that the stuff is expensive, with a quick scan of retail prices showing a delta of 13 to 15 times the price of hard disks, measured purely on a capacity basis.

In the enterprise, though, things aren’t quite as simple as that. The vendors’ arguments in favour of SSDs ignore capacity, as they assume that the real problem is performance, where they can demonstrate that SSDs deliver more value for a given amount of cash than spinning media.
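
Put crudely, the vendors want the comparison made in cost per IOPS rather than cost per gigabyte. A sketch with round, invented figures – none of these numbers are quotes from any vendor:

```python
# The vendors' pitch in numbers (all hypothetical, for illustration):
# flash loses badly on $/GB but wins decisively on $/IOPS.
hdd = {"cost_per_gb": 0.10, "iops": 150}    # fast spinning disk, assumed
ssd = {"cost_per_gb": 1.50, "iops": 40000}  # enterprise SSD, assumed

capacity_gb = 100  # compare devices of the same size, say 100 GB

gb_gap = ssd["cost_per_gb"] / hdd["cost_per_gb"]
hdd_per_iops = hdd["cost_per_gb"] * capacity_gb / hdd["iops"]
ssd_per_iops = ssd["cost_per_gb"] * capacity_gb / ssd["iops"]

print(f"$/GB: SSD is {gb_gap:.0f}x dearer")
print(f"$/IOPS: HDD ${hdd_per_iops:.4f} vs SSD ${ssd_per_iops:.4f}")
```

On those assumptions the SSD is 15x dearer per gigabyte but far cheaper per unit of performance, which is exactly the framing the vendors prefer.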

There is truth in this argument, but it’s not as if data growth is slowing down. In fact, when you consider that the next wave of data will come from sensors and what’s generally known as the Internet of things – or machine-to-machine communication – then expect the rate of data growth to increase, as this next data tsunami has barely started.

And conversations with both vendors and end users also show that capacity is not something that can be ignored. If you don’t have or can’t afford additional storage, you might need to do something drastic – like actually manage the stuff, although each time I’ve mooted that, I’m told that it remains more costly to do than technological fixes like thin provisioning and deduplication.

In practice, the vendors are, as so often happens in this industry, way ahead of all but the largest, most well-heeled customers. Most users, I would contend, are more concerned with ensuring that they have enough storage to handle projected data growth over the next six months. Offer them high-cost, low-capacity storage technology and they may well reject it in favour of capacity now.

When I put this point to him, LSI’s EMEA channel sales director Thomas Pavel reckoned that the market needed education. Maybe it does. Or maybe it’s just fighting to keep up with demand.

Filed under: Enterprise, Servers, Storage
